Test Report: Docker_macOS 14695

16c8c96838ca145d17ecca8303180c41961a99dd:2022-08-01:25115

Failed tests (22/289)

TestDownloadOnly/v1.16.0/preload-exists (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.10s)
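The check that fails at aaa_download_only_test.go:107 appears to be a plain stat of the cached preload tarball. A minimal sketch in Go, not minikube's actual test code; `tarballPath` is a hypothetical stand-in for however minikube derives the cache path:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// tarballPath is a hypothetical helper that rebuilds the cache path seen in
// the error above from MINIKUBE_HOME and the Kubernetes version under test.
func tarballPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	path := tarballPath(os.Getenv("MINIKUBE_HOME"), "v1.16.0")
	if _, err := os.Stat(path); err != nil {
		// Same failure the test reports: os.Stat returns
		// "no such file or directory" when the preload was never cached.
		fmt.Printf("failed to verify preloaded tarball file exists: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("preload tarball present:", path)
}
```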

TestIngressAddonLegacy/StartLegacyK8sCluster (255.26s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220801164357-13911 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0801 16:43:59.353659   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 16:45:21.276825   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 16:47:37.433768   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 16:47:39.072816   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:39.078727   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:39.088884   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:39.111114   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:39.151287   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:39.233502   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:39.394784   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:39.715885   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:40.356382   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:41.636614   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:44.198959   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:49.321432   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:47:59.564054   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:48:05.123287   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220801164357-13911 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m15.235212241s)
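The repeated cert_rotation.go:168 lines above look like noise from client-go's client-certificate reloader: the kubeconfig still references client.crt files under profiles from earlier tests (addons-..., functional-...) whose directories have since been deleted. A minimal sketch of that failure mode, with hypothetical paths:

```go
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io/fs"
)

func main() {
	// Hypothetical profile paths; the real ones live under
	// $MINIKUBE_HOME/profiles/<name>/ and were removed by earlier cleanup.
	certFile := "/path/to/profiles/example/client.crt"
	keyFile := "/path/to/profiles/example/client.key"

	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			// Mirrors the "key failed with : open ...: no such file or
			// directory" lines: the cert was deleted out from under the reloader.
			fmt.Println("client certificate missing:", err)
			return
		}
		fmt.Println("failed to load key pair:", err)
	}
}
```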

-- stdout --
	* [ingress-addon-legacy-20220801164357-13911] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-20220801164357-13911 in cluster ingress-addon-legacy-20220801164357-13911
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0801 16:43:57.781455   17916 out.go:296] Setting OutFile to fd 1 ...
	I0801 16:43:57.781675   17916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:43:57.781681   17916 out.go:309] Setting ErrFile to fd 2...
	I0801 16:43:57.781684   17916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:43:57.781793   17916 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 16:43:57.782337   17916 out.go:303] Setting JSON to false
	I0801 16:43:57.797412   17916 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":6208,"bootTime":1659391229,"procs":349,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 16:43:57.797507   17916 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 16:43:57.818732   17916 out.go:177] * [ingress-addon-legacy-20220801164357-13911] minikube v1.26.0 on Darwin 12.5
	I0801 16:43:57.860743   17916 notify.go:193] Checking for updates...
	I0801 16:43:57.882595   17916 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 16:43:57.903639   17916 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 16:43:57.924706   17916 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 16:43:57.946903   17916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 16:43:57.968982   17916 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 16:43:57.990918   17916 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 16:43:58.061322   17916 docker.go:137] docker version: linux-20.10.17
	I0801 16:43:58.061456   17916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 16:43:58.193842   17916 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-08-01 23:43:58.131295997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 16:43:58.215914   17916 out.go:177] * Using the docker driver based on user configuration
	I0801 16:43:58.237720   17916 start.go:284] selected driver: docker
	I0801 16:43:58.237748   17916 start.go:808] validating driver "docker" against <nil>
	I0801 16:43:58.237771   17916 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 16:43:58.241184   17916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 16:43:58.374490   17916 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-08-01 23:43:58.312273292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 16:43:58.374613   17916 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0801 16:43:58.374750   17916 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 16:43:58.396526   17916 out.go:177] * Using Docker Desktop driver with root privileges
	I0801 16:43:58.418443   17916 cni.go:95] Creating CNI manager for ""
	I0801 16:43:58.418476   17916 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 16:43:58.418497   17916 start_flags.go:310] config:
	{Name:ingress-addon-legacy-20220801164357-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220801164357-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServe
rIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 16:43:58.440418   17916 out.go:177] * Starting control plane node ingress-addon-legacy-20220801164357-13911 in cluster ingress-addon-legacy-20220801164357-13911
	I0801 16:43:58.482254   17916 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 16:43:58.503268   17916 out.go:177] * Pulling base image ...
	I0801 16:43:58.545427   17916 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 16:43:58.545486   17916 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0801 16:43:58.612662   17916 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 16:43:58.612685   17916 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 16:43:58.628064   17916 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0801 16:43:58.628089   17916 cache.go:57] Caching tarball of preloaded images
	I0801 16:43:58.628544   17916 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0801 16:43:58.671190   17916 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0801 16:43:58.692338   17916 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0801 16:43:58.786628   17916 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0801 16:44:03.415541   17916 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0801 16:44:03.415681   17916 preload.go:256] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0801 16:44:04.040121   17916 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0801 16:44:04.040357   17916 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/config.json ...
	I0801 16:44:04.040383   17916 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/config.json: {Name:mk83930aeb030672bf8b97e4146477cb30443cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 16:44:04.040659   17916 cache.go:208] Successfully downloaded all kic artifacts
	I0801 16:44:04.040688   17916 start.go:371] acquiring machines lock for ingress-addon-legacy-20220801164357-13911: {Name:mk6cfe4f3c230a805c787c6358c419b28afbab7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 16:44:04.040780   17916 start.go:375] acquired machines lock for "ingress-addon-legacy-20220801164357-13911" in 85.125µs
	I0801 16:44:04.040803   17916 start.go:92] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220801164357-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220801
164357-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 16:44:04.040847   17916 start.go:132] createHost starting for "" (driver="docker")
	I0801 16:44:04.086849   17916 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0801 16:44:04.087228   17916 start.go:166] libmachine.API.Create for "ingress-addon-legacy-20220801164357-13911" (driver="docker")
	I0801 16:44:04.087277   17916 client.go:168] LocalClient.Create starting
	I0801 16:44:04.087438   17916 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem
	I0801 16:44:04.087510   17916 main.go:134] libmachine: Decoding PEM data...
	I0801 16:44:04.087542   17916 main.go:134] libmachine: Parsing certificate...
	I0801 16:44:04.087634   17916 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem
	I0801 16:44:04.087685   17916 main.go:134] libmachine: Decoding PEM data...
	I0801 16:44:04.087707   17916 main.go:134] libmachine: Parsing certificate...
	I0801 16:44:04.088484   17916 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220801164357-13911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0801 16:44:04.153741   17916 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220801164357-13911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0801 16:44:04.153832   17916 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220801164357-13911] to gather additional debugging logs...
	I0801 16:44:04.153854   17916 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220801164357-13911
	W0801 16:44:04.216512   17916 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220801164357-13911 returned with exit code 1
	I0801 16:44:04.216538   17916 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220801164357-13911]: docker network inspect ingress-addon-legacy-20220801164357-13911: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220801164357-13911
	I0801 16:44:04.216564   17916 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220801164357-13911]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220801164357-13911
	
	** /stderr **
	I0801 16:44:04.216654   17916 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0801 16:44:04.279359   17916 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010488] misses:0}
	I0801 16:44:04.279400   17916 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 16:44:04.279416   17916 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220801164357-13911 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0801 16:44:04.279480   17916 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220801164357-13911 ingress-addon-legacy-20220801164357-13911
	I0801 16:44:04.373955   17916 network_create.go:99] docker network ingress-addon-legacy-20220801164357-13911 192.168.49.0/24 created
	I0801 16:44:04.374008   17916 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20220801164357-13911" container
	I0801 16:44:04.374121   17916 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0801 16:44:04.436645   17916 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220801164357-13911 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220801164357-13911 --label created_by.minikube.sigs.k8s.io=true
	I0801 16:44:04.500033   17916 oci.go:103] Successfully created a docker volume ingress-addon-legacy-20220801164357-13911
	I0801 16:44:04.500154   17916 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-20220801164357-13911-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220801164357-13911 --entrypoint /usr/bin/test -v ingress-addon-legacy-20220801164357-13911:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib
	I0801 16:44:04.955596   17916 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-20220801164357-13911
	I0801 16:44:04.955784   17916 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0801 16:44:04.955801   17916 kic.go:179] Starting extracting preloaded images to volume ...
	I0801 16:44:04.955920   17916 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220801164357-13911:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0801 16:44:09.523831   17916 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220801164357-13911:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.567700486s)
	I0801 16:44:09.523853   17916 kic.go:188] duration metric: took 4.567937 seconds to extract preloaded images to volume
	I0801 16:44:09.524100   17916 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0801 16:44:09.655003   17916 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-20220801164357-13911 --name ingress-addon-legacy-20220801164357-13911 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220801164357-13911 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-20220801164357-13911 --network ingress-addon-legacy-20220801164357-13911 --ip 192.168.49.2 --volume ingress-addon-legacy-20220801164357-13911:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8
	I0801 16:44:10.013058   17916 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220801164357-13911 --format={{.State.Running}}
	I0801 16:44:10.082193   17916 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220801164357-13911 --format={{.State.Status}}
	I0801 16:44:10.158746   17916 cli_runner.go:164] Run: docker exec ingress-addon-legacy-20220801164357-13911 stat /var/lib/dpkg/alternatives/iptables
	I0801 16:44:10.292578   17916 oci.go:144] the created container "ingress-addon-legacy-20220801164357-13911" has a running status.
	I0801 16:44:10.292605   17916 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/ingress-addon-legacy-20220801164357-13911/id_rsa...
	I0801 16:44:10.432979   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/ingress-addon-legacy-20220801164357-13911/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0801 16:44:10.433037   17916 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/ingress-addon-legacy-20220801164357-13911/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0801 16:44:10.544258   17916 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220801164357-13911 --format={{.State.Status}}
	I0801 16:44:10.612250   17916 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0801 16:44:10.612685   17916 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-20220801164357-13911 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0801 16:44:10.729492   17916 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220801164357-13911 --format={{.State.Status}}
	I0801 16:44:10.795717   17916 machine.go:88] provisioning docker machine ...
	I0801 16:44:10.796136   17916 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-20220801164357-13911"
	I0801 16:44:10.796241   17916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:10.865086   17916 main.go:134] libmachine: Using SSH client type: native
	I0801 16:44:10.865699   17916 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57446 <nil> <nil>}
	I0801 16:44:10.865718   17916 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-20220801164357-13911 && echo "ingress-addon-legacy-20220801164357-13911" | sudo tee /etc/hostname
	I0801 16:44:10.985178   17916 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-20220801164357-13911
	
	I0801 16:44:10.985255   17916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:11.052314   17916 main.go:134] libmachine: Using SSH client type: native
	I0801 16:44:11.052599   17916 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57446 <nil> <nil>}
	I0801 16:44:11.052615   17916 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-20220801164357-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-20220801164357-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-20220801164357-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 16:44:11.168758   17916 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 16:44:11.168783   17916 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 16:44:11.168812   17916 ubuntu.go:177] setting up certificates
	I0801 16:44:11.168822   17916 provision.go:83] configureAuth start
	I0801 16:44:11.168894   17916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:11.237071   17916 provision.go:138] copyHostCerts
	I0801 16:44:11.237206   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 16:44:11.237258   17916 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 16:44:11.237270   17916 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 16:44:11.237371   17916 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 16:44:11.237538   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 16:44:11.237570   17916 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 16:44:11.237575   17916 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 16:44:11.237633   17916 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 16:44:11.237737   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 16:44:11.237767   17916 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 16:44:11.237772   17916 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 16:44:11.237825   17916 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 16:44:11.237942   17916 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-20220801164357-13911 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-20220801164357-13911]
	I0801 16:44:11.320502   17916 provision.go:172] copyRemoteCerts
	I0801 16:44:11.320552   17916 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 16:44:11.320609   17916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:11.389360   17916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57446 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/ingress-addon-legacy-20220801164357-13911/id_rsa Username:docker}
	I0801 16:44:11.470332   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0801 16:44:11.470391   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 16:44:11.486745   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0801 16:44:11.486808   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1294 bytes)
	I0801 16:44:11.503466   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0801 16:44:11.503535   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0801 16:44:11.520686   17916 provision.go:86] duration metric: configureAuth took 351.84448ms
	I0801 16:44:11.520699   17916 ubuntu.go:193] setting minikube options for container-runtime
	I0801 16:44:11.520840   17916 config.go:180] Loaded profile config "ingress-addon-legacy-20220801164357-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0801 16:44:11.520907   17916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:11.587912   17916 main.go:134] libmachine: Using SSH client type: native
	I0801 16:44:11.588066   17916 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57446 <nil> <nil>}
	I0801 16:44:11.588078   17916 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 16:44:11.699039   17916 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 16:44:11.699061   17916 ubuntu.go:71] root file system type: overlay
	I0801 16:44:11.699177   17916 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 16:44:11.699246   17916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:11.766724   17916 main.go:134] libmachine: Using SSH client type: native
	I0801 16:44:11.766893   17916 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57446 <nil> <nil>}
	I0801 16:44:11.766955   17916 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 16:44:11.887952   17916 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 16:44:11.888127   17916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:11.956379   17916 main.go:134] libmachine: Using SSH client type: native
	I0801 16:44:11.956656   17916 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57446 <nil> <nil>}
	I0801 16:44:11.956671   17916 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 16:44:12.522860   17916 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-01 23:44:11.886683118 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0801 16:44:12.522881   17916 machine.go:91] provisioned docker machine in 1.726721746s
	I0801 16:44:12.522887   17916 client.go:171] LocalClient.Create took 8.435390968s
	I0801 16:44:12.522921   17916 start.go:174] duration metric: libmachine.API.Create for "ingress-addon-legacy-20220801164357-13911" took 8.435483017s
	I0801 16:44:12.522934   17916 start.go:307] post-start starting for "ingress-addon-legacy-20220801164357-13911" (driver="docker")
	I0801 16:44:12.522940   17916 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 16:44:12.523017   17916 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 16:44:12.523071   17916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:12.591636   17916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57446 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/ingress-addon-legacy-20220801164357-13911/id_rsa Username:docker}
	I0801 16:44:12.675491   17916 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 16:44:12.678970   17916 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 16:44:12.678986   17916 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 16:44:12.678998   17916 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 16:44:12.679003   17916 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 16:44:12.679013   17916 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 16:44:12.679114   17916 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 16:44:12.679252   17916 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 16:44:12.679259   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> /etc/ssl/certs/139112.pem
	I0801 16:44:12.679420   17916 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 16:44:12.686181   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 16:44:12.702960   17916 start.go:310] post-start completed in 180.012178ms
	I0801 16:44:12.703458   17916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:12.773026   17916 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/config.json ...
	I0801 16:44:12.773436   17916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 16:44:12.773490   17916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:12.840931   17916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57446 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/ingress-addon-legacy-20220801164357-13911/id_rsa Username:docker}
	I0801 16:44:12.922195   17916 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 16:44:12.926795   17916 start.go:135] duration metric: createHost completed in 8.885718341s
	I0801 16:44:12.926812   17916 start.go:82] releasing machines lock for "ingress-addon-legacy-20220801164357-13911", held for 8.885799354s
	I0801 16:44:12.926886   17916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:12.994207   17916 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 16:44:12.994265   17916 ssh_runner.go:195] Run: systemctl --version
	I0801 16:44:12.994283   17916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:12.994315   17916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:13.064819   17916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57446 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/ingress-addon-legacy-20220801164357-13911/id_rsa Username:docker}
	I0801 16:44:13.066736   17916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57446 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/ingress-addon-legacy-20220801164357-13911/id_rsa Username:docker}
	I0801 16:44:13.146930   17916 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 16:44:13.338402   17916 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 16:44:13.338473   17916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 16:44:13.347770   17916 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 16:44:13.359772   17916 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 16:44:13.427823   17916 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 16:44:13.491642   17916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 16:44:13.558947   17916 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 16:44:13.762424   17916 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 16:44:13.799035   17916 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 16:44:13.858510   17916 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	I0801 16:44:13.858693   17916 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-20220801164357-13911 dig +short host.docker.internal
	I0801 16:44:13.982836   17916 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 16:44:13.983145   17916 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 16:44:13.987443   17916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
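The one-liner above is an idempotent hosts-file update: grep -v drops any stale host.minikube.internal entry, echo appends the current mapping, and the temp file is copied back over /etc/hosts. The same pattern, unrolled for readability:

	# Sketch: idempotent /etc/hosts entry (same pattern as the command above).
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts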
	I0801 16:44:13.997083   17916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:44:14.064739   17916 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0801 16:44:14.064810   17916 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 16:44:14.093486   17916 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0801 16:44:14.093503   17916 docker.go:542] Images already preloaded, skipping extraction
	I0801 16:44:14.093563   17916 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 16:44:14.121931   17916 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0801 16:44:14.121950   17916 cache_images.go:84] Images are preloaded, skipping loading
	I0801 16:44:14.122019   17916 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 16:44:14.193547   17916 cni.go:95] Creating CNI manager for ""
	I0801 16:44:14.193559   17916 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 16:44:14.193572   17916 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 16:44:14.193588   17916 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-20220801164357-13911 NodeName:ingress-addon-legacy-20220801164357-13911 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 16:44:14.193706   17916 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-20220801164357-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
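The rendered kubeadm.yaml can be exercised before kubeadm init mutates the node. A sketch of two sanity checks; both subcommands exist in this kubeadm line, though flag support and output vary by version:

	# Sketch: sanity-check the generated config before init.
	kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml   # images the config implies
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run  # render without mutating the node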
	
	I0801 16:44:14.193795   17916 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-20220801164357-13911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220801164357-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
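The kubelet unit text above uses the same ExecStart-clearing override pattern as the docker drop-in earlier, and is written to disk in the scp steps just below (10-kubeadm.conf and kubelet.service). A quick way to confirm what systemd will actually run once those files land:

	# Sketch: inspect the merged kubelet unit after the scp steps below.
	systemctl cat kubelet                                      # base unit plus 10-kubeadm.conf
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf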
	I0801 16:44:14.193853   17916 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0801 16:44:14.202073   17916 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 16:44:14.202173   17916 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 16:44:14.210186   17916 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0801 16:44:14.222667   17916 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0801 16:44:14.234824   17916 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0801 16:44:14.246990   17916 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0801 16:44:14.250616   17916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 16:44:14.259728   17916 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911 for IP: 192.168.49.2
	I0801 16:44:14.259834   17916 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 16:44:14.259883   17916 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 16:44:14.259922   17916 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/client.key
	I0801 16:44:14.259935   17916 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/client.crt with IP's: []
	I0801 16:44:14.333852   17916 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/client.crt ...
	I0801 16:44:14.333864   17916 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/client.crt: {Name:mk5fd352aff18e7cfbabe6e5ff89f3e6a2ce607f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 16:44:14.334146   17916 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/client.key ...
	I0801 16:44:14.334154   17916 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/client.key: {Name:mk8c23f2857ee92a629a45679f9fc053f18b3067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 16:44:14.334370   17916 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.key.dd3b5fb2
	I0801 16:44:14.334391   17916 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0801 16:44:14.654353   17916 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.crt.dd3b5fb2 ...
	I0801 16:44:14.654370   17916 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.crt.dd3b5fb2: {Name:mk161457d8b3d5073a63e295178cca722c478e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 16:44:14.654664   17916 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.key.dd3b5fb2 ...
	I0801 16:44:14.654673   17916 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.key.dd3b5fb2: {Name:mk9530713c52c3ee498e8229ec09ab7adbd4f715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 16:44:14.654863   17916 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.crt
	I0801 16:44:14.655021   17916 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.key
	I0801 16:44:14.655178   17916 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/proxy-client.key
	I0801 16:44:14.655194   17916 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/proxy-client.crt with IP's: []
	I0801 16:44:14.802393   17916 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/proxy-client.crt ...
	I0801 16:44:14.802408   17916 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/proxy-client.crt: {Name:mk578d787eb03da32f8d35acec3660e6f0816749 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 16:44:14.802693   17916 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/proxy-client.key ...
	I0801 16:44:14.802700   17916 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/proxy-client.key: {Name:mkebdb7be8022cf71d5cf66b977178133a14c572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 16:44:14.802954   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0801 16:44:14.802979   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0801 16:44:14.803019   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0801 16:44:14.803037   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0801 16:44:14.803067   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0801 16:44:14.803083   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0801 16:44:14.803099   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0801 16:44:14.803114   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0801 16:44:14.803222   17916 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 16:44:14.803258   17916 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 16:44:14.803266   17916 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 16:44:14.803300   17916 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 16:44:14.803327   17916 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 16:44:14.803357   17916 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 16:44:14.803421   17916 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 16:44:14.803457   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem -> /usr/share/ca-certificates/13911.pem
	I0801 16:44:14.803474   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> /usr/share/ca-certificates/139112.pem
	I0801 16:44:14.803490   17916 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0801 16:44:14.804004   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 16:44:14.821666   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 16:44:14.838260   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 16:44:14.854513   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801164357-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0801 16:44:14.870895   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 16:44:14.886902   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 16:44:14.903175   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 16:44:14.919505   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 16:44:14.935704   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 16:44:14.955865   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 16:44:14.973583   17916 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 16:44:14.990203   17916 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 16:44:15.002727   17916 ssh_runner.go:195] Run: openssl version
	I0801 16:44:15.007768   17916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 16:44:15.015118   17916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 16:44:15.018690   17916 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 16:44:15.018728   17916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 16:44:15.023400   17916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 16:44:15.030520   17916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 16:44:15.037975   17916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 16:44:15.041711   17916 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 16:44:15.041753   17916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 16:44:15.046603   17916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
	I0801 16:44:15.053918   17916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 16:44:15.061199   17916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 16:44:15.065280   17916 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 16:44:15.065318   17916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 16:44:15.070355   17916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
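The b5213941.0-style names above are OpenSSL subject-hash links: openssl x509 -hash prints the hash under which TLS libraries look certificates up in /etc/ssl/certs. A sketch of deriving one by hand:

	# Sketch: derive the hash-named symlink for a CA certificate.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"   # prints b5213941 for this CA, matching the link created above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"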
	I0801 16:44:15.077909   17916 kubeadm.go:395] StartCluster: {Name:ingress-addon-legacy-20220801164357-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220801164357-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 16:44:15.077995   17916 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 16:44:15.106693   17916 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 16:44:15.113906   17916 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 16:44:15.120803   17916 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 16:44:15.120854   17916 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 16:44:15.127665   17916 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 16:44:15.127693   17916 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 16:44:15.839544   17916 out.go:204]   - Generating certificates and keys ...
	I0801 16:44:18.793047   17916 out.go:204]   - Booting up control plane ...
	W0801 16:46:13.739198   17916 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220801164357-13911 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220801164357-13911 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0801 23:44:15.174347     956 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0801 23:44:18.804508     956 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0801 23:44:18.805498     956 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
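The checks the message above recommends, in the order minikube itself runs them further down this log (--no-pager and the tail are additions for non-interactive use):

	# Sketch: the troubleshooting steps suggested above.
	systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	docker ps -a | grep kube | grep -v pause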
	
	I0801 16:46:13.739234   17916 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0801 16:46:14.180226   17916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 16:46:14.189345   17916 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 16:46:14.189400   17916 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 16:46:14.197037   17916 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 16:46:14.197062   17916 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 16:46:14.886966   17916 out.go:204]   - Generating certificates and keys ...
	I0801 16:46:15.443157   17916 out.go:204]   - Booting up control plane ...
	I0801 16:48:10.363167   17916 kubeadm.go:397] StartCluster complete in 3m55.279343785s
	I0801 16:48:10.363237   17916 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 16:48:10.391482   17916 logs.go:274] 0 containers: []
	W0801 16:48:10.391494   17916 logs.go:276] No container was found matching "kube-apiserver"
	I0801 16:48:10.391558   17916 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 16:48:10.419284   17916 logs.go:274] 0 containers: []
	W0801 16:48:10.419296   17916 logs.go:276] No container was found matching "etcd"
	I0801 16:48:10.419354   17916 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 16:48:10.448626   17916 logs.go:274] 0 containers: []
	W0801 16:48:10.448638   17916 logs.go:276] No container was found matching "coredns"
	I0801 16:48:10.448694   17916 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 16:48:10.477144   17916 logs.go:274] 0 containers: []
	W0801 16:48:10.477160   17916 logs.go:276] No container was found matching "kube-scheduler"
	I0801 16:48:10.477238   17916 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 16:48:10.505411   17916 logs.go:274] 0 containers: []
	W0801 16:48:10.505425   17916 logs.go:276] No container was found matching "kube-proxy"
	I0801 16:48:10.505486   17916 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 16:48:10.533304   17916 logs.go:274] 0 containers: []
	W0801 16:48:10.533316   17916 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 16:48:10.533376   17916 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 16:48:10.562471   17916 logs.go:274] 0 containers: []
	W0801 16:48:10.562483   17916 logs.go:276] No container was found matching "storage-provisioner"
	I0801 16:48:10.562543   17916 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 16:48:10.589608   17916 logs.go:274] 0 containers: []
	W0801 16:48:10.589621   17916 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 16:48:10.589633   17916 logs.go:123] Gathering logs for kubelet ...
	I0801 16:48:10.589645   17916 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 16:48:10.629464   17916 logs.go:123] Gathering logs for dmesg ...
	I0801 16:48:10.629477   17916 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 16:48:10.641113   17916 logs.go:123] Gathering logs for describe nodes ...
	I0801 16:48:10.641124   17916 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 16:48:10.690787   17916 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 16:48:10.690798   17916 logs.go:123] Gathering logs for Docker ...
	I0801 16:48:10.690804   17916 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 16:48:10.705683   17916 logs.go:123] Gathering logs for container status ...
	I0801 16:48:10.705697   17916 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 16:48:12.760140   17916 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054378667s)
	W0801 16:48:12.760261   17916 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0801 23:46:14.244949    3432 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0801 23:46:15.430255    3432 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0801 23:46:15.432352    3432 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
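Since this run uses --driver=docker, the kubelet that kubeadm is waiting for lives inside the node container, which minikube names after the profile (the docker inspect output later in this report confirms the name). The checks kubeadm suggests above can therefore be run from the macOS host via docker exec; a minimal sketch, assuming that container is still running:

    docker exec ingress-addon-legacy-20220801164357-13911 systemctl status kubelet
    docker exec ingress-addon-legacy-20220801164357-13911 journalctl -xeu kubelet --no-pager | tail -n 50
    docker exec ingress-addon-legacy-20220801164357-13911 sh -c 'docker ps -a | grep kube | grep -v pause'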
	W0801 16:48:12.760276   17916 out.go:239] * 
	W0801 16:48:12.760438   17916 out.go:239] * 
	W0801 16:48:12.760997   17916 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 16:48:12.823782   17916 out.go:177] 
	W0801 16:48:12.886948   17916 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W0801 16:48:12.887118   17916 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0801 16:48:12.887276   17916 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0801 16:48:12.908779   17916 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220801164357-13911 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (255.26s)
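The suggestion in the log above is to force the kubelet's cgroup driver to systemd. A plausible retry from the same workspace, reusing the flags of the failing command plus the suggested one (a sketch; the profile is deleted first so kubeadm starts clean):

    out/minikube-darwin-amd64 delete -p ingress-addon-legacy-20220801164357-13911
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220801164357-13911 \
        --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
        --extra-config=kubelet.cgroup-driver=systemd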

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.64s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220801164357-13911 addons enable ingress --alsologtostderr -v=5
E0801 16:48:20.046952   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 16:49:00.997529   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220801164357-13911 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.140297048s)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

-- /stdout --
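The "minikube tunnel" note in the stdout above is how ingress traffic reaches 127.0.0.1 with the docker driver on macOS; had the addon come up, the tunnel would be started against this profile roughly as follows (a sketch using the binary under test):

    out/minikube-darwin-amd64 tunnel -p ingress-addon-legacy-20220801164357-13911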
** stderr ** 
	I0801 16:48:13.052336   18277 out.go:296] Setting OutFile to fd 1 ...
	I0801 16:48:13.053166   18277 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:48:13.053172   18277 out.go:309] Setting ErrFile to fd 2...
	I0801 16:48:13.053176   18277 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:48:13.053276   18277 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 16:48:13.074753   18277 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0801 16:48:13.095937   18277 config.go:180] Loaded profile config "ingress-addon-legacy-20220801164357-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0801 16:48:13.095959   18277 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220801164357-13911"
	I0801 16:48:13.095966   18277 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220801164357-13911"
	I0801 16:48:13.096306   18277 host.go:66] Checking if "ingress-addon-legacy-20220801164357-13911" exists ...
	I0801 16:48:13.096869   18277 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220801164357-13911 --format={{.State.Status}}
	I0801 16:48:13.187799   18277 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	I0801 16:48:13.209129   18277 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0801 16:48:13.230469   18277 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0801 16:48:13.252502   18277 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0801 16:48:13.278554   18277 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0801 16:48:13.278592   18277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I0801 16:48:13.278734   18277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:48:13.346975   18277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57446 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/ingress-addon-legacy-20220801164357-13911/id_rsa Username:docker}
	I0801 16:48:13.435809   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:13.484543   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:13.484572   18277 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:13.763085   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:13.816757   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:13.816784   18277 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:14.359234   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:14.412360   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:14.412374   18277 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:15.068546   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:15.120866   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:15.120881   18277 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:15.914329   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:15.966611   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:15.966626   18277 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:17.139025   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:17.190415   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:17.190429   18277 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:19.443883   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:19.496803   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:19.496820   18277 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:21.107806   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:21.158765   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:21.158780   18277 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:23.963510   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:24.017007   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:24.017047   18277 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:27.843293   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:27.894757   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:27.894771   18277 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:35.590992   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:35.643378   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:35.643399   18277 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:50.272029   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:48:50.324441   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:48:50.324455   18277 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:18.729402   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:49:18.782757   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:18.782772   18277 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:41.953038   18277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0801 16:49:42.004794   18277 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:42.004826   18277 addons.go:383] Verifying addon ingress=true in "ingress-addon-legacy-20220801164357-13911"
	I0801 16:49:42.026609   18277 out.go:177] * Verifying ingress addon...
	I0801 16:49:42.049623   18277 out.go:177] 
	W0801 16:49:42.071584   18277 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220801164357-13911" does not exist: client config: context "ingress-addon-legacy-20220801164357-13911" does not exist]
	W0801 16:49:42.071611   18277 out.go:239] * 
	W0801 16:49:42.076292   18277 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 16:49:42.102607   18277 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
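Every retry in the stderr above fails identically: kubectl inside the node cannot reach the apiserver on localhost:8443, consistent with the control-plane static pods never starting in the previous test. A quick cross-check from the host, assuming the in-container runtime is docker as the profile config states:

    docker exec ingress-addon-legacy-20220801164357-13911 sh -c 'docker ps -a | grep kube-apiserver'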
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220801164357-13911
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220801164357-13911:

-- stdout --
	[
	    {
	        "Id": "46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a",
	        "Created": "2022-08-01T23:44:09.740889412Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-01T23:44:10.004024891Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/hostname",
	        "HostsPath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/hosts",
	        "LogPath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a-json.log",
	        "Name": "/ingress-addon-legacy-20220801164357-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220801164357-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220801164357-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4ec465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/docker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220801164357-13911",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220801164357-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220801164357-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220801164357-13911",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220801164357-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfe60697118feb9439a0e46744dbfcc5048d03aa11b0d160ebecc4358502ba9b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57446"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57447"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57448"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57449"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57450"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bfe60697118f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220801164357-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "46231e7e1bc3",
	                        "ingress-addon-legacy-20220801164357-13911"
	                    ],
	                    "NetworkID": "e53cdd9f8d9b848d63920258b671ab8945d5cc40d64eec958714585efa2f50e4",
	                    "EndpointID": "5ade4a9b265e719a943d30a760f7bd177479e98162b457036438260c01368982",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
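Rather than reading the full docker inspect dump above, a single field can be pulled out with a Go template; the harness itself does this when it looks up the forwarded SSH port (the cli_runner line at 16:49:42 below). A minimal sketch in Go, assuming only that a container with this profile name exists locally:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Fetch one field from docker inspect instead of the whole JSON dump.
		// The Go template is the same one the harness runs via cli_runner.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"ingress-addon-legacy-20220801164357-13911").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // per the dump above: 57446
	}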
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220801164357-13911 -n ingress-addon-legacy-20220801164357-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220801164357-13911 -n ingress-addon-legacy-20220801164357-13911: exit status 6 (430.778111ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0801 16:49:42.617681   18380 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220801164357-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220801164357-13911" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.64s)
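The status failure above is a kubeconfig problem, not a container problem: the inspect output shows the node running, but the profile name is missing from the kubeconfig file, so status.go cannot extract the apiserver IP. A minimal sketch of that kind of lookup (not minikube's actual status.go; the kubeconfig path is a placeholder), using client-go:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig and look for the profile's cluster entry,
		// mirroring the "does not appear in ... kubeconfig" check above.
		cfg, err := clientcmd.LoadFromFile("/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		const profile = "ingress-addon-legacy-20220801164357-13911"
		cluster, ok := cfg.Clusters[profile]
		if !ok {
			fmt.Printf("%q does not appear in kubeconfig\n", profile)
			return
		}
		fmt.Println("endpoint:", cluster.Server) // the endpoint the IP is extracted from
	}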

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220801164357-13911 addons enable ingress-dns --alsologtostderr -v=5
E0801 16:50:22.918423   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220801164357-13911 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.056121329s)

-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0801 16:49:42.677188   18390 out.go:296] Setting OutFile to fd 1 ...
	I0801 16:49:42.677873   18390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:49:42.677879   18390 out.go:309] Setting ErrFile to fd 2...
	I0801 16:49:42.677883   18390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:49:42.677981   18390 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 16:49:42.699629   18390 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0801 16:49:42.721276   18390 config.go:180] Loaded profile config "ingress-addon-legacy-20220801164357-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0801 16:49:42.721310   18390 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-20220801164357-13911"
	I0801 16:49:42.721321   18390 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-20220801164357-13911"
	I0801 16:49:42.721863   18390 host.go:66] Checking if "ingress-addon-legacy-20220801164357-13911" exists ...
	I0801 16:49:42.722756   18390 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220801164357-13911 --format={{.State.Status}}
	I0801 16:49:42.811915   18390 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0801 16:49:42.833798   18390 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0801 16:49:42.855757   18390 addons.go:345] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0801 16:49:42.855793   18390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0801 16:49:42.855927   18390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220801164357-13911
	I0801 16:49:42.924936   18390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57446 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/ingress-addon-legacy-20220801164357-13911/id_rsa Username:docker}
	I0801 16:49:43.013242   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:49:43.068508   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:43.068527   18390 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:43.347042   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:49:43.400255   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:43.400278   18390 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:43.941090   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:49:43.991673   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:43.991687   18390 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:44.649083   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:49:44.700302   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:44.700319   18390 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:45.493858   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:49:45.547335   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:45.547356   18390 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:46.718031   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:49:46.773350   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:46.773364   18390 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:49.026684   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:49:49.079134   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:49.079154   18390 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:50.690434   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:49:50.740775   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:50.740791   18390 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:53.545753   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:49:53.595409   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:53.595424   18390 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:57.422661   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:49:57.475767   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:49:57.475781   18390 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:50:05.175686   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:50:05.227871   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:50:05.227887   18390 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:50:19.865077   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:50:19.915815   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:50:19.915830   18390 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:50:48.323322   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:50:48.374123   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:50:48.374145   18390 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:51:11.543033   18390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0801 16:51:11.593884   18390 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0801 16:51:11.615713   18390 out.go:177] 
	W0801 16:51:11.636767   18390 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0801 16:51:11.636794   18390 out.go:239] * 
	* 
	W0801 16:51:11.640752   18390 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 16:51:11.661518   18390 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
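The stderr above shows addons.go retrying the kubectl apply with increasing delays (276ms, 540ms, 655ms, ... 28s) until the ~90-second budget is exhausted, because the apiserver on localhost:8443 never comes up. A minimal sketch of that retry pattern, under the assumption of a simple doubling policy rather than minikube's actual retry.go (which jitters the delays):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retry keeps calling fn until it succeeds or the time budget is spent,
	// growing the wait between attempts, as in the log above.
	func retry(budget time.Duration, fn func() error) error {
		start, delay := time.Now(), 250*time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start)+delay > budget {
				return err // out of time: surface the last error to the caller
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
	}

	func main() {
		// Short budget for demonstration; the run above allowed roughly 90s.
		_ = retry(2*time.Second, func() error {
			return errors.New("connection to the server localhost:8443 was refused")
		})
	}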
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220801164357-13911
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220801164357-13911:

-- stdout --
	[
	    {
	        "Id": "46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a",
	        "Created": "2022-08-01T23:44:09.740889412Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-01T23:44:10.004024891Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/hostname",
	        "HostsPath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/hosts",
	        "LogPath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a-json.log",
	        "Name": "/ingress-addon-legacy-20220801164357-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220801164357-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220801164357-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220801164357-13911",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220801164357-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220801164357-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220801164357-13911",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220801164357-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfe60697118feb9439a0e46744dbfcc5048d03aa11b0d160ebecc4358502ba9b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57446"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57447"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57448"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57449"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57450"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bfe60697118f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220801164357-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "46231e7e1bc3",
	                        "ingress-addon-legacy-20220801164357-13911"
	                    ],
	                    "NetworkID": "e53cdd9f8d9b848d63920258b671ab8945d5cc40d64eec958714585efa2f50e4",
	                    "EndpointID": "5ade4a9b265e719a943d30a760f7bd177479e98162b457036438260c01368982",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220801164357-13911 -n ingress-addon-legacy-20220801164357-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220801164357-13911 -n ingress-addon-legacy-20220801164357-13911: exit status 6 (485.906103ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0801 16:51:12.232159   18490 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220801164357-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220801164357-13911" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.61s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.5s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:158: failed to get Kubernetes client: <nil>
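"failed to get Kubernetes client: <nil>" means the harness could not even construct a client from the profile's kubeconfig entry, consistent with the missing-context errors in the two preceding subtests. A minimal sketch of the client construction that has to succeed first, assuming client-go and a placeholder kubeconfig path:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a REST config from the kubeconfig, then a typed clientset.
		// This is the step that fails when the profile is absent from the file.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("client ready:", client != nil)
	}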
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220801164357-13911
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220801164357-13911:

-- stdout --
	[
	    {
	        "Id": "46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a",
	        "Created": "2022-08-01T23:44:09.740889412Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-01T23:44:10.004024891Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/hostname",
	        "HostsPath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/hosts",
	        "LogPath": "/var/lib/docker/containers/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a/46231e7e1bc32c0d16288a0438c48f9b942f13ab6fdfeb5ce07c87679d6b7a4a-json.log",
	        "Name": "/ingress-addon-legacy-20220801164357-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220801164357-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220801164357-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55392adaa3ed627ee75f3634715b61c00d15a6995f6f1d53887a31d50db85c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220801164357-13911",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220801164357-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220801164357-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220801164357-13911",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220801164357-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfe60697118feb9439a0e46744dbfcc5048d03aa11b0d160ebecc4358502ba9b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57446"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57447"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57448"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57449"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57450"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bfe60697118f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220801164357-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "46231e7e1bc3",
	                        "ingress-addon-legacy-20220801164357-13911"
	                    ],
	                    "NetworkID": "e53cdd9f8d9b848d63920258b671ab8945d5cc40d64eec958714585efa2f50e4",
	                    "EndpointID": "5ade4a9b265e719a943d30a760f7bd177479e98162b457036438260c01368982",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
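(The inspect dump above is easiest to read field-by-field. A minimal sketch, reusing the container name from the output above; docker's --format flag accepts Go templates, which is how the narrower inspect calls later in this log work too. These commands were not part of the test run:

    $ docker container inspect ingress-addon-legacy-20220801164357-13911 --format '{{json .NetworkSettings.Ports}}'
    $ docker container inspect ingress-addon-legacy-20220801164357-13911 --format '{{.GraphDriver.Name}}'
)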
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220801164357-13911 -n ingress-addon-legacy-20220801164357-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220801164357-13911 -n ingress-addon-legacy-20220801164357-13911: exit status 6 (426.440167ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0801 16:51:12.729572   18502 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220801164357-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220801164357-13911" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.50s)
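(The failure recorded above is a stale kubeconfig entry: the profile's host is Running, but its endpoint is missing from the kubeconfig file, so status exits with code 6. Following the hint printed in the warning itself, a plausible manual repair, not part of the test run, would be:

    $ out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-20220801164357-13911
    $ kubectl config current-context
)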

TestPreload (264.56s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220801170316-13911 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0801 17:04:02.133854   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 17:07:37.457965   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
preload_test.go:48: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20220801170316-13911 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 109 (4m21.45728851s)

-- stdout --
	* [test-preload-20220801170316-13911] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node test-preload-20220801170316-13911 in cluster test-preload-20220801170316-13911
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
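(Note that "Generating certificates and keys" and "Booting up control plane" each appear twice in the stdout above, suggesting kubeadm init was retried once before the start finally failed with exit status 109. When triaging a failure like this, the usual next step would be to pull the cluster logs for the profile; a sketch, not part of the test run:

    $ out/minikube-darwin-amd64 logs -p test-preload-20220801170316-13911
)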
** stderr ** 
	I0801 17:03:16.812801   22222 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:03:16.812968   22222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:03:16.812973   22222 out.go:309] Setting ErrFile to fd 2...
	I0801 17:03:16.812979   22222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:03:16.813072   22222 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:03:16.813554   22222 out.go:303] Setting JSON to false
	I0801 17:03:16.828750   22222 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7367,"bootTime":1659391229,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:03:16.828832   22222 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:03:16.867266   22222 out.go:177] * [test-preload-20220801170316-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:03:16.909948   22222 notify.go:193] Checking for updates...
	I0801 17:03:16.930983   22222 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:03:16.951892   22222 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:03:16.973180   22222 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:03:16.995106   22222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:03:17.017097   22222 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:03:17.038412   22222 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:03:17.108726   22222 docker.go:137] docker version: linux-20.10.17
	I0801 17:03:17.108872   22222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:03:17.275640   22222 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-08-02 00:03:17.193779744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:03:17.317575   22222 out.go:177] * Using the docker driver based on user configuration
	I0801 17:03:17.338737   22222 start.go:284] selected driver: docker
	I0801 17:03:17.338768   22222 start.go:808] validating driver "docker" against <nil>
	I0801 17:03:17.338791   22222 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:03:17.342422   22222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:03:17.473550   22222 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-08-02 00:03:17.404951922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:03:17.473660   22222 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0801 17:03:17.473820   22222 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 17:03:17.495426   22222 out.go:177] * Using Docker Desktop driver with root privileges
	I0801 17:03:17.516205   22222 cni.go:95] Creating CNI manager for ""
	I0801 17:03:17.516239   22222 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:03:17.516305   22222 start_flags.go:310] config:
	{Name:test-preload-20220801170316-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220801170316-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:03:17.538318   22222 out.go:177] * Starting control plane node test-preload-20220801170316-13911 in cluster test-preload-20220801170316-13911
	I0801 17:03:17.580327   22222 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:03:17.602299   22222 out.go:177] * Pulling base image ...
	I0801 17:03:17.644354   22222 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:03:17.644356   22222 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0801 17:03:17.644725   22222 cache.go:107] acquiring lock: {Name:mkce27c207a7bf01881de4cf2e18a8ec061785d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:03:17.644724   22222 cache.go:107] acquiring lock: {Name:mk9b8bf4636842ab07289b1174f58101226f166a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:03:17.646582   22222 cache.go:107] acquiring lock: {Name:mk6f37f014cd0844e60dc9643585431560cd3d80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:03:17.646642   22222 cache.go:107] acquiring lock: {Name:mka3479b2f510428c39c6093977234d42b214ad0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:03:17.646689   22222 cache.go:107] acquiring lock: {Name:mk7ff294c030949f09b0ef3f1f7cbeb672575114 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:03:17.646735   22222 cache.go:107] acquiring lock: {Name:mk0569c7a5d30a2a5f2230814452c47a1b6d60aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:03:17.646891   22222 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0801 17:03:17.647465   22222 cache.go:107] acquiring lock: {Name:mk819ea2706731d8610478ed0a8125fd6c47482e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:03:17.647307   22222 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.216913ms
	I0801 17:03:17.647794   22222 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0801 17:03:17.647788   22222 cache.go:107] acquiring lock: {Name:mkba52a26b53dacf9c42ae5f5e27822abfc55da0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:03:17.647943   22222 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0801 17:03:17.647958   22222 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0801 17:03:17.648030   22222 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/config.json ...
	I0801 17:03:17.648081   22222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/config.json: {Name:mk145f94ffecacbe2060bd48b0d062e104ce0525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:03:17.648105   22222 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0801 17:03:17.648120   22222 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0801 17:03:17.648190   22222 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0801 17:03:17.648244   22222 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0801 17:03:17.648294   22222 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0801 17:03:17.654708   22222 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0801 17:03:17.655983   22222 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0801 17:03:17.656116   22222 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0801 17:03:17.656461   22222 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0801 17:03:17.657083   22222 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0801 17:03:17.657172   22222 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0801 17:03:17.657969   22222 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
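(The seven "daemon lookup ... No such image" lines above are expected on a clean daemon: none of the v1.17.0 control-plane images exist locally, so minikube appears to fall back to pulling them and caching them as tarballs, which is what the cache.go lines below record. Since MINIKUBE_HOME points at the .minikube directory here, the resulting cache can be listed with, for example, a command like this, which was not part of the test run:

    $ ls "$MINIKUBE_HOME/cache/images/amd64/k8s.gcr.io"
)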
	I0801 17:03:17.713946   22222 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:03:17.713973   22222 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:03:17.713986   22222 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:03:17.714051   22222 start.go:371] acquiring machines lock for test-preload-20220801170316-13911: {Name:mkb6e8f25ce50fae4824407274e29d9edee7a4c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:03:17.714196   22222 start.go:375] acquired machines lock for "test-preload-20220801170316-13911" in 132.846µs
	I0801 17:03:17.714221   22222 start.go:92] Provisioning new machine with config: &{Name:test-preload-20220801170316-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220801170316-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:03:17.714335   22222 start.go:132] createHost starting for "" (driver="docker")
	I0801 17:03:17.756850   22222 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0801 17:03:17.757129   22222 start.go:166] libmachine.API.Create for "test-preload-20220801170316-13911" (driver="docker")
	I0801 17:03:17.757157   22222 client.go:168] LocalClient.Create starting
	I0801 17:03:17.757212   22222 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem
	I0801 17:03:17.757246   22222 main.go:134] libmachine: Decoding PEM data...
	I0801 17:03:17.757275   22222 main.go:134] libmachine: Parsing certificate...
	I0801 17:03:17.757332   22222 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem
	I0801 17:03:17.757356   22222 main.go:134] libmachine: Decoding PEM data...
	I0801 17:03:17.757368   22222 main.go:134] libmachine: Parsing certificate...
	I0801 17:03:17.757851   22222 cli_runner.go:164] Run: docker network inspect test-preload-20220801170316-13911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0801 17:03:17.821235   22222 cli_runner.go:211] docker network inspect test-preload-20220801170316-13911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0801 17:03:17.821311   22222 network_create.go:272] running [docker network inspect test-preload-20220801170316-13911] to gather additional debugging logs...
	I0801 17:03:17.821325   22222 cli_runner.go:164] Run: docker network inspect test-preload-20220801170316-13911
	W0801 17:03:17.884596   22222 cli_runner.go:211] docker network inspect test-preload-20220801170316-13911 returned with exit code 1
	I0801 17:03:17.884614   22222 network_create.go:275] error running [docker network inspect test-preload-20220801170316-13911]: docker network inspect test-preload-20220801170316-13911: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220801170316-13911
	I0801 17:03:17.884629   22222 network_create.go:277] output of [docker network inspect test-preload-20220801170316-13911]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220801170316-13911
	
	** /stderr **
	I0801 17:03:17.884685   22222 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0801 17:03:17.947986   22222 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006e24c8] misses:0}
	I0801 17:03:17.948025   22222 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:03:17.948044   22222 network_create.go:115] attempt to create docker network test-preload-20220801170316-13911 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0801 17:03:17.948115   22222 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220801170316-13911 test-preload-20220801170316-13911
	W0801 17:03:18.011039   22222 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220801170316-13911 test-preload-20220801170316-13911 returned with exit code 1
	W0801 17:03:18.011072   22222 network_create.go:107] failed to create docker network test-preload-20220801170316-13911 192.168.49.0/24, will retry: subnet is taken
	I0801 17:03:18.011301   22222 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006e24c8] amended:false}} dirty:map[] misses:0}
	I0801 17:03:18.011317   22222 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:03:18.011523   22222 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006e24c8] amended:true}} dirty:map[192.168.49.0:0xc0006e24c8 192.168.58.0:0xc0003da6b8] misses:0}
	I0801 17:03:18.011536   22222 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:03:18.011543   22222 network_create.go:115] attempt to create docker network test-preload-20220801170316-13911 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0801 17:03:18.011595   22222 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220801170316-13911 test-preload-20220801170316-13911
	W0801 17:03:18.073555   22222 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220801170316-13911 test-preload-20220801170316-13911 returned with exit code 1
	W0801 17:03:18.073583   22222 network_create.go:107] failed to create docker network test-preload-20220801170316-13911 192.168.58.0/24, will retry: subnet is taken
	I0801 17:03:18.073841   22222 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006e24c8] amended:true}} dirty:map[192.168.49.0:0xc0006e24c8 192.168.58.0:0xc0003da6b8] misses:1}
	I0801 17:03:18.073871   22222 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:03:18.074065   22222 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006e24c8] amended:true}} dirty:map[192.168.49.0:0xc0006e24c8 192.168.58.0:0xc0003da6b8 192.168.67.0:0xc00012f7c0] misses:1}
	I0801 17:03:18.074081   22222 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:03:18.074089   22222 network_create.go:115] attempt to create docker network test-preload-20220801170316-13911 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0801 17:03:18.074162   22222 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220801170316-13911 test-preload-20220801170316-13911
	I0801 17:03:18.167103   22222 network_create.go:99] docker network test-preload-20220801170316-13911 192.168.67.0/24 created
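(The two "subnet is taken" retries above show minikube walking its candidate subnets, 192.168.49.0/24 and 192.168.58.0/24, before 192.168.67.0/24 succeeds. Which subnets existing docker networks already occupy can be checked with a template much like the one minikube itself runs earlier in this log; a sketch, not part of the test run:

    $ docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}' $(docker network ls -q)
)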
	I0801 17:03:18.167128   22222 kic.go:106] calculated static IP "192.168.67.2" for the "test-preload-20220801170316-13911" container
	I0801 17:03:18.167208   22222 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0801 17:03:18.229782   22222 cli_runner.go:164] Run: docker volume create test-preload-20220801170316-13911 --label name.minikube.sigs.k8s.io=test-preload-20220801170316-13911 --label created_by.minikube.sigs.k8s.io=true
	I0801 17:03:18.292679   22222 oci.go:103] Successfully created a docker volume test-preload-20220801170316-13911
	I0801 17:03:18.292755   22222 cli_runner.go:164] Run: docker run --rm --name test-preload-20220801170316-13911-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220801170316-13911 --entrypoint /usr/bin/test -v test-preload-20220801170316-13911:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib
	I0801 17:03:18.355903   22222 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0801 17:03:18.356085   22222 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0801 17:03:18.357062   22222 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0801 17:03:18.357364   22222 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0801 17:03:18.387404   22222 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0801 17:03:18.475685   22222 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0801 17:03:18.475703   22222 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 829.430609ms
	I0801 17:03:18.475712   22222 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0801 17:03:18.505537   22222 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0801 17:03:18.578130   22222 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0801 17:03:18.733130   22222 oci.go:107] Successfully prepared a docker volume test-preload-20220801170316-13911
	I0801 17:03:18.733159   22222 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0801 17:03:18.733234   22222 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0801 17:03:18.867876   22222 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-20220801170316-13911 --name test-preload-20220801170316-13911 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220801170316-13911 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-20220801170316-13911 --network test-preload-20220801170316-13911 --ip 192.168.67.2 --volume test-preload-20220801170316-13911:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8
	I0801 17:03:19.269224   22222 cli_runner.go:164] Run: docker container inspect test-preload-20220801170316-13911 --format={{.State.Running}}
	I0801 17:03:19.344264   22222 cli_runner.go:164] Run: docker container inspect test-preload-20220801170316-13911 --format={{.State.Status}}
	I0801 17:03:19.421980   22222 cli_runner.go:164] Run: docker exec test-preload-20220801170316-13911 stat /var/lib/dpkg/alternatives/iptables
	I0801 17:03:19.556918   22222 oci.go:144] the created container "test-preload-20220801170316-13911" has a running status.
	I0801 17:03:19.556947   22222 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/test-preload-20220801170316-13911/id_rsa...
	I0801 17:03:19.587312   22222 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0801 17:03:19.587327   22222 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 1.94089517s
	I0801 17:03:19.587337   22222 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0801 17:03:19.681030   22222 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/test-preload-20220801170316-13911/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0801 17:03:19.796140   22222 cli_runner.go:164] Run: docker container inspect test-preload-20220801170316-13911 --format={{.State.Status}}
	I0801 17:03:19.864863   22222 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0801 17:03:19.864880   22222 kic_runner.go:114] Args: [docker exec --privileged test-preload-20220801170316-13911 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0801 17:03:19.984279   22222 cli_runner.go:164] Run: docker container inspect test-preload-20220801170316-13911 --format={{.State.Status}}
	I0801 17:03:20.054709   22222 machine.go:88] provisioning docker machine ...
	I0801 17:03:20.054751   22222 ubuntu.go:169] provisioning hostname "test-preload-20220801170316-13911"
	I0801 17:03:20.054856   22222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220801170316-13911
	I0801 17:03:20.123832   22222 main.go:134] libmachine: Using SSH client type: native
	I0801 17:03:20.124038   22222 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60982 <nil> <nil>}
	I0801 17:03:20.124058   22222 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-20220801170316-13911 && echo "test-preload-20220801170316-13911" | sudo tee /etc/hostname
	I0801 17:03:20.245870   22222 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-20220801170316-13911
	
	I0801 17:03:20.245957   22222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220801170316-13911
	I0801 17:03:20.290812   22222 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0801 17:03:20.290849   22222 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 2.64427764s
	I0801 17:03:20.290867   22222 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0801 17:03:20.315553   22222 main.go:134] libmachine: Using SSH client type: native
	I0801 17:03:20.315709   22222 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60982 <nil> <nil>}
	I0801 17:03:20.315725   22222 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20220801170316-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20220801170316-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20220801170316-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:03:20.425764   22222 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:03:20.425788   22222 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:03:20.425812   22222 ubuntu.go:177] setting up certificates
	I0801 17:03:20.425820   22222 provision.go:83] configureAuth start
	I0801 17:03:20.425890   22222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220801170316-13911
	I0801 17:03:20.495966   22222 provision.go:138] copyHostCerts
	I0801 17:03:20.496042   22222 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:03:20.496052   22222 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:03:20.496147   22222 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:03:20.496331   22222 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:03:20.496351   22222 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:03:20.496420   22222 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:03:20.496562   22222 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:03:20.496568   22222 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:03:20.496634   22222 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:03:20.496755   22222 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.test-preload-20220801170316-13911 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20220801170316-13911]
	I0801 17:03:20.657739   22222 provision.go:172] copyRemoteCerts
	I0801 17:03:20.657804   22222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:03:20.657859   22222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220801170316-13911
	I0801 17:03:20.692962   22222 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0801 17:03:20.692986   22222 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 3.048244528s
	I0801 17:03:20.692998   22222 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0801 17:03:20.726370   22222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60982 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/test-preload-20220801170316-13911/id_rsa Username:docker}
	I0801 17:03:20.726406   22222 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0801 17:03:20.726424   22222 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 3.079766852s
	I0801 17:03:20.726432   22222 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0801 17:03:20.809486   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0801 17:03:20.827867   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0801 17:03:20.845436   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:03:20.863595   22222 provision.go:86] duration metric: configureAuth took 437.758072ms
	I0801 17:03:20.863607   22222 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:03:20.863739   22222 config.go:180] Loaded profile config "test-preload-20220801170316-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0801 17:03:20.863796   22222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220801170316-13911
	I0801 17:03:20.935553   22222 main.go:134] libmachine: Using SSH client type: native
	I0801 17:03:20.935692   22222 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60982 <nil> <nil>}
	I0801 17:03:20.935706   22222 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:03:21.048910   22222 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:03:21.048922   22222 ubuntu.go:71] root file system type: overlay
	I0801 17:03:21.049049   22222 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:03:21.049124   22222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220801170316-13911
	I0801 17:03:21.120977   22222 main.go:134] libmachine: Using SSH client type: native
	I0801 17:03:21.121139   22222 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60982 <nil> <nil>}
	I0801 17:03:21.121185   22222 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:03:21.240070   22222 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0801 17:03:21.240090   22222 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 3.593784126s
	I0801 17:03:21.240122   22222 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0801 17:03:21.241180   22222 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:03:21.241251   22222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220801170316-13911
	I0801 17:03:21.309419   22222 main.go:134] libmachine: Using SSH client type: native
	I0801 17:03:21.309578   22222 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60982 <nil> <nil>}
	I0801 17:03:21.309597   22222 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
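[editor's note] The command above is an idempotent update: diff -u exits 0 when the candidate unit matches the installed one, so the move/daemon-reload/restart branch only fires on a real change. A sketch of how such a command string could be assembled (hypothetical helper, not the actual provision.go code):

    package main

    import "fmt"

    // updateUnitCmd builds the diff-or-swap one-liner: the `|| { ... }`
    // branch runs only when the files differ (diff exits non-zero).
    func updateUnitCmd(installed, candidate, service string) string {
        return fmt.Sprintf("sudo diff -u %[1]s %[2]s || { sudo mv %[2]s %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %[3]s && sudo systemctl -f restart %[3]s; }",
            installed, candidate, service)
    }

    func main() {
        fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new", "docker"))
    }
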
	I0801 17:03:21.329952   22222 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0801 17:03:21.329971   22222 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 3.683664597s
	I0801 17:03:21.329986   22222 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0801 17:03:21.330005   22222 cache.go:87] Successfully saved all images to host disk.
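[editor's note] The cache.go lines interleaved above are the host side of image caching: each k8s.gcr.io image is saved to a tar under .minikube/cache/images once, then reused across profiles. A reduced sketch of that idea, using docker pull/docker save as stand-ins (the real code writes the tar with a registry client library rather than the docker CLI):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // cacheToTar skips work when the tar already exists, otherwise pulls
    // the image and saves it into the local cache path.
    func cacheToTar(image, dest string) error {
        if _, err := os.Stat(dest); err == nil {
            fmt.Printf("cache image %q -> %q: exists\n", image, dest)
            return nil
        }
        if err := exec.Command("docker", "pull", image).Run(); err != nil {
            return err
        }
        if err := exec.Command("docker", "save", "-o", dest, image).Run(); err != nil {
            return err
        }
        fmt.Printf("save to tar file %s -> %s succeeded\n", image, dest)
        return nil
    }

    func main() {
        if err := cacheToTar("k8s.gcr.io/pause:3.1", "/tmp/pause_3.1"); err != nil {
            panic(err)
        }
    }
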
	I0801 17:03:21.913564   22222 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-02 00:03:21.247267726 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0801 17:03:21.913586   22222 machine.go:91] provisioned docker machine in 1.858831068s
	I0801 17:03:21.913593   22222 client.go:171] LocalClient.Create took 4.15637562s
	I0801 17:03:21.913612   22222 start.go:174] duration metric: libmachine.API.Create for "test-preload-20220801170316-13911" took 4.156423353s
	I0801 17:03:21.913621   22222 start.go:307] post-start starting for "test-preload-20220801170316-13911" (driver="docker")
	I0801 17:03:21.913626   22222 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:03:21.913695   22222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:03:21.913744   22222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220801170316-13911
	I0801 17:03:21.981178   22222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60982 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/test-preload-20220801170316-13911/id_rsa Username:docker}
	I0801 17:03:22.065520   22222 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:03:22.069005   22222 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:03:22.069020   22222 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:03:22.069027   22222 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:03:22.069033   22222 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:03:22.069042   22222 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:03:22.069145   22222 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:03:22.069280   22222 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:03:22.069427   22222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:03:22.076373   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:03:22.093102   22222 start.go:310] post-start completed in 179.469735ms
	I0801 17:03:22.093749   22222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220801170316-13911
	I0801 17:03:22.161058   22222 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/config.json ...
	I0801 17:03:22.161451   22222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:03:22.161500   22222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220801170316-13911
	I0801 17:03:22.229016   22222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60982 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/test-preload-20220801170316-13911/id_rsa Username:docker}
	I0801 17:03:22.309970   22222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:03:22.314188   22222 start.go:135] duration metric: createHost completed in 4.599782281s
	I0801 17:03:22.314203   22222 start.go:82] releasing machines lock for "test-preload-20220801170316-13911", held for 4.599935129s
	I0801 17:03:22.314271   22222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220801170316-13911
	I0801 17:03:22.381352   22222 ssh_runner.go:195] Run: systemctl --version
	I0801 17:03:22.381365   22222 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:03:22.381437   22222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220801170316-13911
	I0801 17:03:22.381444   22222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220801170316-13911
	I0801 17:03:22.452995   22222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60982 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/test-preload-20220801170316-13911/id_rsa Username:docker}
	I0801 17:03:22.453529   22222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60982 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/test-preload-20220801170316-13911/id_rsa Username:docker}
	I0801 17:03:22.535683   22222 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:03:22.726337   22222 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:03:22.726412   22222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:03:22.736032   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:03:22.748802   22222 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:03:22.817796   22222 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:03:22.888514   22222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:03:22.949010   22222 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:03:23.146065   22222 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:03:23.182702   22222 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:03:23.265252   22222 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	I0801 17:03:23.265415   22222 cli_runner.go:164] Run: docker exec -t test-preload-20220801170316-13911 dig +short host.docker.internal
	I0801 17:03:23.389407   22222 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:03:23.389496   22222 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:03:23.393832   22222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:03:23.403109   22222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" test-preload-20220801170316-13911
	I0801 17:03:23.470531   22222 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0801 17:03:23.470588   22222 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:03:23.498219   22222 docker.go:611] Got preloaded images: 
	I0801 17:03:23.498265   22222 docker.go:617] k8s.gcr.io/kube-apiserver:v1.17.0 wasn't preloaded
	I0801 17:03:23.498273   22222 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
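[editor's note] LoadImages then runs the same routine per image, visible in the interleaved lines below: docker image inspect to test for the expected hash, docker rmi for stale copies, scp of the cached tar, and a piped docker load. A condensed sketch of that routine, where Runner and the scp parameter are hypothetical stand-ins for ssh_runner:

    package main

    import "fmt"

    // Runner stands in for ssh_runner: run a command on the node,
    // returning an error on non-zero exit.
    type Runner func(cmd string) error

    // ensureImage mirrors the per-image flow in the log:
    // inspect -> (miss) -> rmi -> scp tar -> docker load.
    func ensureImage(run Runner, scp func(src, dst string) error, img, localTar, remoteTar string) error {
        if run("docker image inspect --format {{.Id}} "+img) == nil {
            return nil // already present at the expected hash
        }
        _ = run("docker rmi " + img) // drop any stale copy, best effort
        if err := scp(localTar, remoteTar); err != nil {
            return err
        }
        return run(fmt.Sprintf(`/bin/bash -c "sudo cat %s | docker load"`, remoteTar))
    }

    func main() {
        noop := func(string) error { return nil }
        _ = ensureImage(noop, func(src, dst string) error { return nil },
            "k8s.gcr.io/pause:3.1",
            ".minikube/cache/images/amd64/k8s.gcr.io/pause_3.1",
            "/var/lib/minikube/images/pause_3.1")
    }
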
	I0801 17:03:23.506213   22222 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:03:23.506543   22222 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0801 17:03:23.506959   22222 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0801 17:03:23.507164   22222 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0801 17:03:23.507705   22222 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0801 17:03:23.508022   22222 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0801 17:03:23.508395   22222 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0801 17:03:23.511282   22222 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0801 17:03:23.512639   22222 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0801 17:03:23.514117   22222 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0801 17:03:23.514758   22222 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:03:23.515198   22222 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0801 17:03:23.515354   22222 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0801 17:03:23.515637   22222 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0801 17:03:23.515884   22222 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0801 17:03:23.516419   22222 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0801 17:03:24.080315   22222 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0801 17:03:24.110369   22222 cache_images.go:116] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0801 17:03:24.110409   22222 docker.go:292] Removing image: k8s.gcr.io/coredns:1.6.5
	I0801 17:03:24.110464   22222 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns:1.6.5
	I0801 17:03:24.131990   22222 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
	I0801 17:03:24.139171   22222 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0801 17:03:24.139301   22222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0801 17:03:24.161794   22222 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0801 17:03:24.161806   22222 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_1.6.5: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.6.5': No such file or directory
	I0801 17:03:24.161818   22222 docker.go:292] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0801 17:03:24.161838   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I0801 17:03:24.161861   22222 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0801 17:03:24.205750   22222 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0801 17:03:24.205880   22222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0801 17:03:24.207517   22222 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0801 17:03:24.243467   22222 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.0': No such file or directory
	I0801 17:03:24.243509   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I0801 17:03:24.257045   22222 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
	I0801 17:03:24.274409   22222 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
	I0801 17:03:24.283884   22222 cache_images.go:116] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0801 17:03:24.283932   22222 docker.go:292] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0801 17:03:24.283988   22222 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.4.3-0
	I0801 17:03:24.317019   22222 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:03:24.343184   22222 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0801 17:03:24.344829   22222 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
	I0801 17:03:24.363976   22222 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0801 17:03:24.364018   22222 docker.go:292] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0801 17:03:24.364055   22222 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0801 17:03:24.364107   22222 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0801 17:03:24.364120   22222 docker.go:292] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0801 17:03:24.364190   22222 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0801 17:03:24.385085   22222 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0801 17:03:24.385241   22222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0801 17:03:24.404732   22222 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0801 17:03:24.404761   22222 docker.go:292] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:03:24.404825   22222 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:03:24.471155   22222 cache_images.go:116] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0801 17:03:24.471185   22222 docker.go:292] Removing image: k8s.gcr.io/pause:3.1
	I0801 17:03:24.471192   22222 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0801 17:03:24.471212   22222 docker.go:292] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0801 17:03:24.471259   22222 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0801 17:03:24.471260   22222 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.1
	I0801 17:03:24.489980   22222 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0801 17:03:24.490000   22222 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0801 17:03:24.490053   22222 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
	I0801 17:03:24.490087   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0801 17:03:24.490134   22222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0801 17:03:24.490136   22222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0801 17:03:24.498690   22222 docker.go:259] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0801 17:03:24.498705   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I0801 17:03:24.512291   22222 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0801 17:03:24.512462   22222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0801 17:03:24.596220   22222 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0801 17:03:24.596220   22222 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0801 17:03:24.596363   22222 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.0': No such file or directory
	I0801 17:03:24.596392   22222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0801 17:03:24.596396   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I0801 17:03:24.596408   22222 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.0': No such file or directory
	I0801 17:03:24.596429   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I0801 17:03:24.596437   22222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0801 17:03:25.363088   22222 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0801 17:03:25.363125   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0801 17:03:25.363127   22222 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0801 17:03:25.363149   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0801 17:03:25.363172   22222 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.0': No such file or directory
	I0801 17:03:25.363192   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I0801 17:03:25.368903   22222 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 from cache
	I0801 17:03:25.519231   22222 docker.go:259] Loading image: /var/lib/minikube/images/pause_3.1
	I0801 17:03:25.519251   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I0801 17:03:25.767356   22222 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 from cache
	I0801 17:03:26.330665   22222 docker.go:259] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0801 17:03:26.330697   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0801 17:03:26.938196   22222 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0801 17:03:27.342192   22222 docker.go:259] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0801 17:03:27.342210   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I0801 17:03:29.426346   22222 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load": (2.084093917s)
	I0801 17:03:29.426360   22222 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0801 17:03:29.426383   22222 docker.go:259] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0801 17:03:29.426394   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I0801 17:03:30.332537   22222 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0801 17:03:30.332560   22222 docker.go:259] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0801 17:03:30.332571   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I0801 17:03:31.221970   22222 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0801 17:03:31.222002   22222 docker.go:259] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0801 17:03:31.222013   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I0801 17:03:32.310424   22222 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (1.08838267s)
	I0801 17:03:32.310438   22222 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0801 17:03:32.310461   22222 docker.go:259] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0801 17:03:32.310469   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I0801 17:03:35.192371   22222 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load": (2.881850583s)
	I0801 17:03:35.192385   22222 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0801 17:03:35.192409   22222 cache_images.go:123] Successfully loaded all cached images
	I0801 17:03:35.192413   22222 cache_images.go:92] LoadImages completed in 11.693970522s
	I0801 17:03:35.192480   22222 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:03:35.264159   22222 cni.go:95] Creating CNI manager for ""
	I0801 17:03:35.264171   22222 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:03:35.264182   22222 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:03:35.264195   22222 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20220801170316-13911 NodeName:test-preload-20220801170316-13911 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:03:35.264289   22222 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "test-preload-20220801170316-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
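[editor's note] The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:158. A much-reduced sketch of that templating step, covering only four of the fields (the real template spans the full struct and all three config documents):

    package main

    import (
        "os"
        "text/template"
    )

    type kubeadmParams struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        K8sVersion       string
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initTmpl))
        _ = t.Execute(os.Stdout, kubeadmParams{
            AdvertiseAddress: "192.168.67.2",
            BindPort:         8443,
            NodeName:         "test-preload-20220801170316-13911",
            K8sVersion:       "v1.17.0",
        })
    }
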
	I0801 17:03:35.264361   22222 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=test-preload-20220801170316-13911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220801170316-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:03:35.264418   22222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0801 17:03:35.271871   22222 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.0': No such file or directory
	
	Initiating transfer...
	I0801 17:03:35.271916   22222 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0801 17:03:35.279417   22222 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/linux/amd64/v1.17.0/kubectl
	I0801 17:03:35.279418   22222 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/linux/amd64/v1.17.0/kubelet
	I0801 17:03:35.279418   22222 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/linux/amd64/v1.17.0/kubeadm
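[editor's note] The ?checksum=file:...sha256 suffix on each URL tells the downloader to fetch the matching .sha256 sidecar and verify the binary against it. A hand-rolled sketch of just that verification (minikube delegates this to a download library; this only shows the check itself):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl"
        bin, err := fetch(base)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(base + ".sha256")
        if err != nil {
            panic(err)
        }
        got := sha256.Sum256(bin)
        want := strings.Fields(string(sum))[0] // sidecar holds the hex digest
        if hex.EncodeToString(got[:]) != want {
            panic("checksum mismatch")
        }
        fmt.Println("kubectl verified,", len(bin), "bytes")
    }
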
	I0801 17:03:36.285690   22222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0801 17:03:36.291471   22222 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubeadm': No such file or directory
	I0801 17:03:36.291495   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0801 17:03:37.484621   22222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:03:37.494659   22222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0801 17:03:37.498840   22222 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubelet': No such file or directory
	I0801 17:03:37.498866   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I0801 17:03:37.617495   22222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0801 17:03:37.681106   22222 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0801 17:03:37.681138   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0801 17:03:39.916633   22222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:03:39.923564   22222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0801 17:03:39.935914   22222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:03:39.948023   22222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0801 17:03:39.960222   22222 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:03:39.963824   22222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
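[editor's note] This /etc/hosts rewrite (run earlier for host.minikube.internal and here for control-plane.minikube.internal) filters out any old entry before appending the fresh one, so reruns stay idempotent. A sketch that assembles the same one-liner via a hypothetical helper:

    package main

    import "fmt"

    // hostsCmd drops any existing "<tab>name" line, appends the fresh
    // mapping, and copies the result back over /etc/hosts.
    func hostsCmd(ip, name string) string {
        return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
    }

    func main() {
        fmt.Println(hostsCmd("192.168.67.2", "control-plane.minikube.internal"))
    }
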
	I0801 17:03:39.973633   22222 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911 for IP: 192.168.67.2
	I0801 17:03:39.973732   22222 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:03:39.973783   22222 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:03:39.973822   22222 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/client.key
	I0801 17:03:39.973834   22222 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/client.crt with IP's: []
	I0801 17:03:40.051940   22222 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/client.crt ...
	I0801 17:03:40.051952   22222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/client.crt: {Name:mkea6f7567cd794fa9d90a20bd11475fd7d05cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:03:40.052267   22222 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/client.key ...
	I0801 17:03:40.052276   22222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/client.key: {Name:mk2b877b10560e34e31683e72828567d71012ffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:03:40.052490   22222 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.key.c7fa3a9e
	I0801 17:03:40.052508   22222 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0801 17:03:40.375423   22222 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.crt.c7fa3a9e ...
	I0801 17:03:40.375436   22222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.crt.c7fa3a9e: {Name:mk1aa2cf3627a00991e1c03c0a3835ff563fd92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:03:40.375674   22222 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.key.c7fa3a9e ...
	I0801 17:03:40.375699   22222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.key.c7fa3a9e: {Name:mk6cb6ad89d6e1debaa406db7ae60a614d67db6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:03:40.375891   22222 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.crt
	I0801 17:03:40.376043   22222 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.key
	I0801 17:03:40.376188   22222 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/proxy-client.key
	I0801 17:03:40.376203   22222 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/proxy-client.crt with IP's: []
	I0801 17:03:40.462336   22222 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/proxy-client.crt ...
	I0801 17:03:40.462345   22222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/proxy-client.crt: {Name:mka1213bdf1f9590c6bf541340ba230c31316462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:03:40.462564   22222 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/proxy-client.key ...
	I0801 17:03:40.462571   22222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/proxy-client.key: {Name:mk71c6b28ef134cfa2ee0a43a35a0ec68e23702e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
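[editor's note] crypto.go generates each certificate with an explicit IP SAN list; the apiserver cert above gets [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]. A minimal self-signed analogue using only the Go standard library (the real code signs with the minikubeCA key rather than self-signing):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list matching the log: node IP, service IP, loopback.
            IPAddresses: []net.IP{
                net.ParseIP("192.168.67.2"),
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
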
	I0801 17:03:40.462923   22222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:03:40.462968   22222 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:03:40.462978   22222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:03:40.463007   22222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:03:40.463036   22222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:03:40.463070   22222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:03:40.463142   22222 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:03:40.463624   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:03:40.481750   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 17:03:40.498718   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:03:40.515365   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/test-preload-20220801170316-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0801 17:03:40.532470   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:03:40.548565   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:03:40.564624   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:03:40.580636   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:03:40.596618   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:03:40.613398   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:03:40.629580   22222 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:03:40.645666   22222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:03:40.658454   22222 ssh_runner.go:195] Run: openssl version
	I0801 17:03:40.663571   22222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:03:40.670988   22222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:03:40.674861   22222 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:03:40.674905   22222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:03:40.679800   22222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 17:03:40.687283   22222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:03:40.694930   22222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:03:40.698606   22222 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:03:40.698646   22222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:03:40.703775   22222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:03:40.711284   22222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:03:40.718914   22222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:03:40.723018   22222 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:03:40.723073   22222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:03:40.728215   22222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
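	Note: the three test/ln runs above follow the standard OpenSSL c_rehash convention: each CA certificate is symlinked under /etc/ssl/certs by its subject hash plus a ".0" suffix, which is why minikubeCA.pem becomes b5213941.0. A minimal sketch of checking a hash by hand, reusing the exact command from the log:
	
		# prints the 8-hex-digit subject hash (b5213941 for minikubeCA.pem above)
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem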
	I0801 17:03:40.735758   22222 kubeadm.go:395] StartCluster: {Name:test-preload-20220801170316-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220801170316-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
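	Note: the StartCluster dump above records the effective machine config for this run: Driver:docker, Memory:2200, and KubernetesVersion:v1.17.0. A sketch of how those fields map back to start flags (profile name taken from the dump; this is not the full test invocation, which appears in the failure summary below):
	
		out/minikube-darwin-amd64 start -p test-preload-20220801170316-13911 --memory=2200 --kubernetes-version=v1.17.0 --driver=docker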
	I0801 17:03:40.735851   22222 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:03:40.764777   22222 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:03:40.772870   22222 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:03:40.780658   22222 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:03:40.780702   22222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:03:40.789073   22222 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:03:40.789105   22222 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:03:41.509178   22222 out.go:204]   - Generating certificates and keys ...
	I0801 17:03:43.832163   22222 out.go:204]   - Booting up control plane ...
	W0801 17:05:38.772240   22222 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220801170316-13911 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220801170316-13911 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0802 00:03:40.835892    1569 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0802 00:03:40.835949    1569 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0802 00:03:43.818642    1569 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0802 00:03:43.819415    1569 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
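	Note: the kubeadm advice above assumes a shell on the node; with the docker driver the node is the kicbase container, named after the profile (confirmed by the docker inspect output in the post-mortem below). A hedged sketch of running those checks from the host:
	
		docker exec test-preload-20220801170316-13911 systemctl status kubelet
		docker exec test-preload-20220801170316-13911 journalctl -xeu kubelet --no-pager | tail -n 50
		docker exec test-preload-20220801170316-13911 /bin/sh -c 'docker ps -a | grep kube | grep -v pause'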
	I0801 17:05:38.772275   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0801 17:05:39.201816   22222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:05:39.210815   22222 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:05:39.210862   22222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:05:39.218021   22222 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:05:39.218047   22222 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:05:39.916044   22222 out.go:204]   - Generating certificates and keys ...
	I0801 17:05:40.707272   22222 out.go:204]   - Booting up control plane ...
	I0801 17:07:35.627808   22222 kubeadm.go:397] StartCluster complete in 3m54.867624655s
	I0801 17:07:35.627884   22222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:07:35.655919   22222 logs.go:274] 0 containers: []
	W0801 17:07:35.655931   22222 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:07:35.655987   22222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:07:35.685499   22222 logs.go:274] 0 containers: []
	W0801 17:07:35.685512   22222 logs.go:276] No container was found matching "etcd"
	I0801 17:07:35.685570   22222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:07:35.713842   22222 logs.go:274] 0 containers: []
	W0801 17:07:35.713854   22222 logs.go:276] No container was found matching "coredns"
	I0801 17:07:35.713934   22222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:07:35.742937   22222 logs.go:274] 0 containers: []
	W0801 17:07:35.742950   22222 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:07:35.743008   22222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:07:35.772591   22222 logs.go:274] 0 containers: []
	W0801 17:07:35.772605   22222 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:07:35.772663   22222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:07:35.802268   22222 logs.go:274] 0 containers: []
	W0801 17:07:35.802281   22222 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:07:35.802349   22222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:07:35.832141   22222 logs.go:274] 0 containers: []
	W0801 17:07:35.832155   22222 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:07:35.832218   22222 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:07:35.861343   22222 logs.go:274] 0 containers: []
	W0801 17:07:35.861355   22222 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:07:35.861367   22222 logs.go:123] Gathering logs for kubelet ...
	I0801 17:07:35.861375   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:07:35.899369   22222 logs.go:123] Gathering logs for dmesg ...
	I0801 17:07:35.899383   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:07:35.912611   22222 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:07:35.912626   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:07:35.964008   22222 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:07:35.964022   22222 logs.go:123] Gathering logs for Docker ...
	I0801 17:07:35.964028   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:07:35.978657   22222 logs.go:123] Gathering logs for container status ...
	I0801 17:07:35.978670   22222 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:07:38.032450   22222 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05374249s)
	W0801 17:07:38.032570   22222 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0802 00:05:39.261483    3847 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0802 00:05:39.261533    3847 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0802 00:05:40.691337    3847 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0802 00:05:40.692150    3847 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0801 17:07:38.032587   22222 out.go:239] * 
	W0801 17:07:38.032716   22222 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0802 00:05:39.261483    3847 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0802 00:05:39.261533    3847 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0802 00:05:40.691337    3847 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0802 00:05:40.692150    3847 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
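	Note: every kubelet-check failure above is the same probe against the kubelet's default healthz port, 10248. A minimal sketch of running that probe by hand inside the node container (assumes curl is present in the kicbase image):
	
		docker exec test-preload-20220801170316-13911 curl -sSL http://localhost:10248/healthz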
	W0801 17:07:38.032730   22222 out.go:239] * 
	W0801 17:07:38.033258   22222 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
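	Note: a sketch of the log-collection step the box above asks for, scoped to this profile via the global -p flag (--file is quoted from the message itself):
	
		out/minikube-darwin-amd64 logs --file=logs.txt -p test-preload-20220801170316-13911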
	I0801 17:07:38.096904   22222 out.go:177] 
	W0801 17:07:38.139224   22222 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0802 00:05:39.261483    3847 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0802 00:05:39.261533    3847 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0802 00:05:40.691337    3847 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0802 00:05:40.692150    3847 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0802 00:05:39.261483    3847 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0802 00:05:39.261533    3847 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0802 00:05:40.691337    3847 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0802 00:05:40.692150    3847 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 17:07:38.139388   22222 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0801 17:07:38.139455   22222 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0801 17:07:38.182005   22222 out.go:177] 

** /stderr **
preload_test.go:50: out/minikube-darwin-amd64 start -p test-preload-20220801170316-13911 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 109
panic.go:482: *** TestPreload FAILED at 2022-08-01 17:07:38.282551 -0700 PDT m=+2021.650256602
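
The repeated [kubelet-check] lines in the failure above come down to a single HTTP probe against the kubelet's healthz endpoint. Below is a minimal Go sketch of that probe, assuming the default healthz address localhost:10248 shown in the log; it illustrates the check, it is not kubeadm's actual code.

    // A minimal sketch of the health check behind the [kubelet-check]
    // lines: GET the kubelet healthz endpoint and report the outcome.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // "connection refused" here matches the failure above:
            // the kubelet is not running or not listening at all.
            fmt.Println("kubelet not responding:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("kubelet healthz:", resp.Status)
    }
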
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220801170316-13911
helpers_test.go:235: (dbg) docker inspect test-preload-20220801170316-13911:

-- stdout --
	[
	    {
	        "Id": "92a499ad2e0291034a0e306c8151c6767de51d4c23b51d8e1b45889063202a28",
	        "Created": "2022-08-02T00:03:18.940692688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 105664,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:03:19.266497558Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/92a499ad2e0291034a0e306c8151c6767de51d4c23b51d8e1b45889063202a28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92a499ad2e0291034a0e306c8151c6767de51d4c23b51d8e1b45889063202a28/hostname",
	        "HostsPath": "/var/lib/docker/containers/92a499ad2e0291034a0e306c8151c6767de51d4c23b51d8e1b45889063202a28/hosts",
	        "LogPath": "/var/lib/docker/containers/92a499ad2e0291034a0e306c8151c6767de51d4c23b51d8e1b45889063202a28/92a499ad2e0291034a0e306c8151c6767de51d4c23b51d8e1b45889063202a28-json.log",
	        "Name": "/test-preload-20220801170316-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20220801170316-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20220801170316-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/69b1701f63941ef2653e1983ca09898eba8f3c20641c6ca29c052e3a9b241589-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69b1701f63941ef2653e1983ca09898eba8f3c20641c6ca29c052e3a9b241589/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69b1701f63941ef2653e1983ca09898eba8f3c20641c6ca29c052e3a9b241589/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69b1701f63941ef2653e1983ca09898eba8f3c20641c6ca29c052e3a9b241589/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20220801170316-13911",
	                "Source": "/var/lib/docker/volumes/test-preload-20220801170316-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20220801170316-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20220801170316-13911",
	                "name.minikube.sigs.k8s.io": "test-preload-20220801170316-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99468e06c642988bf82308612f57c2162b7d2ee4f98acfe5d8b95383627d1814",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60982"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60978"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60979"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60981"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/99468e06c642",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20220801170316-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "92a499ad2e02",
	                        "test-preload-20220801170316-13911"
	                    ],
	                    "NetworkID": "a8a9f00326196fd56edd62ff039dcf9c172775d98b2aa24e86000cb8dc8633c0",
	                    "EndpointID": "c6b24422ba6a0d8a592b80af498b52ffc01c0646244f42e6a752a9d9b5c1129a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
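
The docker inspect dump above is what the post-mortem helper captures; the fields that matter for triage are State.Status and the published ports. A hedged Go sketch of pulling just those fields out of `docker inspect` output, with the container name copied from the log and the struct limited to the fields shown above:

    // Sketch: decode `docker inspect` output and print the two fields
    // the post-mortem leans on (container state and port bindings).
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type container struct {
        State struct {
            Status  string
            Running bool
        }
        NetworkSettings struct {
            Ports map[string][]struct{ HostIp, HostPort string }
        }
    }

    func main() {
        out, err := exec.Command("docker", "inspect",
            "test-preload-20220801170316-13911").Output()
        if err != nil {
            fmt.Println("docker inspect failed:", err)
            return
        }
        var cs []container
        if err := json.Unmarshal(out, &cs); err != nil {
            fmt.Println("unexpected JSON:", err)
            return
        }
        for _, c := range cs {
            fmt.Println("state:", c.State.Status)
            for port, binds := range c.NetworkSettings.Ports {
                for _, b := range binds {
                    fmt.Printf("  %s -> %s:%s\n", port, b.HostIp, b.HostPort)
                }
            }
        }
    }
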
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220801170316-13911 -n test-preload-20220801170316-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220801170316-13911 -n test-preload-20220801170316-13911: exit status 6 (441.142033ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0801 17:07:38.783850   22754 status.go:413] kubeconfig endpoint: extract IP: "test-preload-20220801170316-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-20220801170316-13911" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
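
The exit status 6 above stems from the profile name being absent from the kubeconfig file. A rough sketch of that check using k8s.io/client-go's kubeconfig loader; this mirrors the condition reported at status.go:413, not minikube's exact implementation, and assumes KUBECONFIG points at the file named in the error:

    // Sketch of the kubeconfig check that fails above: load the file
    // and look for the profile's cluster entry.
    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        path := os.Getenv("KUBECONFIG") // the file named in the error above
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            fmt.Println("cannot load kubeconfig:", err)
            return
        }
        name := "test-preload-20220801170316-13911"
        cluster, ok := cfg.Clusters[name]
        if !ok {
            // The condition reported as "does not appear in .../kubeconfig".
            fmt.Printf("%q does not appear in %s\n", name, path)
            return
        }
        fmt.Println("endpoint:", cluster.Server)
    }
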
helpers_test.go:175: Cleaning up "test-preload-20220801170316-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220801170316-13911
E0801 17:07:39.094057   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220801170316-13911: (2.550079009s)
--- FAIL: TestPreload (264.56s)
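
The failure output above suggests retrying with --extra-config=kubelet.cgroup-driver=systemd. A sketch of that retry as a standalone Go program, using the harness binary path and profile name from the log; both are illustrative outside this CI environment:

    // Sketch: re-run the failed start with the cgroup-driver override
    // that the error output above suggests.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-amd64", "start",
            "-p", "test-preload-20220801170316-13911",
            "--driver=docker",
            "--extra-config=kubelet.cgroup-driver=systemd")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }
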

TestRunningBinaryUpgrade (69.49s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1687144139.exe start -p running-upgrade-20220801171242-13911 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1687144139.exe start -p running-upgrade-20220801171242-13911 --memory=2200 --vm-driver=docker : exit status 70 (54.157366471s)

-- stdout --
	! [running-upgrade-20220801171242-13911] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1948381047
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-02 00:13:19.079839617 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-20220801171242-13911" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-02 00:13:35.233840591 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-20220801171242-13911", then "minikube start -p running-upgrade-20220801171242-13911 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* minikube 1.26.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.26.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try 'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-02 00:13:35.233840591 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
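
The diffs above show the provisioning step that keeps failing: the provisioner rewrites /lib/systemd/system/docker.service, clearing the inherited ExecStart before setting its own, since systemd rejects a second ExecStart= line for a Type=notify service. A sketch of that rendering step with Go's text/template; the template text here is illustrative, not minikube's actual unit file:

    // Sketch of rendering a docker.service override the way the diffs
    // above imply: blank ExecStart= first, then the full daemon command.
    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Service]
    Type=notify
    # Clear the ExecStart inherited from the base unit; systemd refuses a
    # second ExecStart= for Type=notify services otherwise.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{.ExtraOpts}}
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unit))
        data := struct{ ExtraOpts string }{"--default-ulimit=nofile=1048576:1048576"}
        if err := t.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }
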
version_upgrade_test.go:127: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1687144139.exe start -p running-upgrade-20220801171242-13911 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1687144139.exe start -p running-upgrade-20220801171242-13911 --memory=2200 --vm-driver=docker : exit status 70 (4.559417093s)

-- stdout --
	* [running-upgrade-20220801171242-13911] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1085640478
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220801171242-13911" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1687144139.exe start -p running-upgrade-20220801171242-13911 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.1687144139.exe start -p running-upgrade-20220801171242-13911 --memory=2200 --vm-driver=docker : exit status 70 (4.466913861s)

-- stdout --
	* [running-upgrade-20220801171242-13911] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig2262071638
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220801171242-13911" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
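
As the three runs above show, the harness re-invokes the legacy v1.9.0 binary a fixed number of times before giving up. A hedged sketch of that retry loop; the binary path and profile name are illustrative:

    // Sketch of the retry pattern above: run the legacy binary up to
    // three times, stopping on the first success.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const attempts = 3 // matches the three runs logged above
        for i := 1; i <= attempts; i++ {
            cmd := exec.Command("/tmp/minikube-v1.9.0", "start",
                "-p", "running-upgrade-20220801171242-13911",
                "--memory=2200", "--vm-driver=docker")
            if err := cmd.Run(); err == nil {
                fmt.Println("legacy start succeeded on attempt", i)
                return
            }
            fmt.Println("attempt", i, "failed")
        }
        fmt.Println("legacy start failed on all attempts; giving up")
    }
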
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-08-01 17:13:48.93398 -0700 PDT m=+2392.297460420
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220801171242-13911
helpers_test.go:235: (dbg) docker inspect running-upgrade-20220801171242-13911:

-- stdout --
	[
	    {
	        "Id": "bad084f571b83222ab465f20cb0a85a248ff5537ee49912167401489c5de59da",
	        "Created": "2022-08-02T00:13:27.329220086Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 140751,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:13:27.566132107Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/bad084f571b83222ab465f20cb0a85a248ff5537ee49912167401489c5de59da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bad084f571b83222ab465f20cb0a85a248ff5537ee49912167401489c5de59da/hostname",
	        "HostsPath": "/var/lib/docker/containers/bad084f571b83222ab465f20cb0a85a248ff5537ee49912167401489c5de59da/hosts",
	        "LogPath": "/var/lib/docker/containers/bad084f571b83222ab465f20cb0a85a248ff5537ee49912167401489c5de59da/bad084f571b83222ab465f20cb0a85a248ff5537ee49912167401489c5de59da-json.log",
	        "Name": "/running-upgrade-20220801171242-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220801171242-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7ef4e2e6fa399740e2bcd9711ad4b2b2a4bc6d875d2187827c71cf8a6bc25191-init/diff:/var/lib/docker/overlay2/0400cad1b1313cb67e65ebe434b7dd2b29211488ca1be949b79ab4d6a79eb083/diff:/var/lib/docker/overlay2/ed57a0f5f3e1a9318836ec67c8928bcd3c5cb6dc101c50ea25c3dbe9f66b420b/diff:/var/lib/docker/overlay2/d7e41b730acd6ed99af00219d5b49e285810e9ee8617372d4281ac10e21c25e4/diff:/var/lib/docker/overlay2/dc52ef1dcecdf09e164c1863b8dd957e76c92af279dca514512910a777c5ca02/diff:/var/lib/docker/overlay2/cc5049b75b18bac615647f9185e16a39d5a4284077872e5ee4d92dc5a201dad2/diff:/var/lib/docker/overlay2/2566f17d12919bb1dbec910b8ad2bd988a5969b0f7790994fa7ae09b6921dd1b/diff:/var/lib/docker/overlay2/eeb11926bcaf873915458588cc325813b67bc777447d971da22180f1e3faf30c/diff:/var/lib/docker/overlay2/9d42d7c19475b99aa2669f464549b9a142ae2a0ff9a246164abe50e634e98e42/diff:/var/lib/docker/overlay2/5f303196a99ad4a9cae12fb0d21eb8b720994e95533de360b3547dcd7196f01f/diff:/var/lib/docker/overlay2/0ae627
cf2b88ab743a72e1cdd36956b6ac9f3997fae85c34d5713dad9f01dc84/diff:/var/lib/docker/overlay2/e058fad03b36217915773f8ee0df03b8bce92d9a4ead373f8240d8d771572bca/diff:/var/lib/docker/overlay2/6943f35823dec04a8285e8caebd892e09fac68a639bbbacd138e37fd68f0129a/diff:/var/lib/docker/overlay2/d0cc6ebebf4926de68319cedd869e1fc445bf1d364b3b0e35c1e830fe0fe48b4/diff:/var/lib/docker/overlay2/4472e24cfebff93d1e85b6e4d68ff625173c0e3152679abc20700fc92a14b1d1/diff:/var/lib/docker/overlay2/0e6a6441f8d09a9b42dc66b0c1b96324b926db60b70f4887003265eb438ac79d/diff:/var/lib/docker/overlay2/96d290e13d0c5ed9e67442baa879e92e1cdc28880b1d383e731225f02d8f07cd/diff:/var/lib/docker/overlay2/289ef8b1cad82c3009a902132283b644e1498ffcfeadcb259a4a204a83cf3cfd/diff:/var/lib/docker/overlay2/a088d2ff3331391b344eb7c1c616e95b1b8f68c5eaae24166ed26e85752c0464/diff:/var/lib/docker/overlay2/7baccffb45621ad4622b3a2c014a57d4ce16dda8dc7b6f3f11d9821cb964e5aa/diff:/var/lib/docker/overlay2/6cf270cd2e69e14e024959ad818ca7a94272885dc5bbf442baa824ecce417692/diff:/var/lib/d
ocker/overlay2/b2c09f536dfd40bc8116f84562c044148380c7873818bdd91cd50876633f28cd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7ef4e2e6fa399740e2bcd9711ad4b2b2a4bc6d875d2187827c71cf8a6bc25191/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7ef4e2e6fa399740e2bcd9711ad4b2b2a4bc6d875d2187827c71cf8a6bc25191/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7ef4e2e6fa399740e2bcd9711ad4b2b2a4bc6d875d2187827c71cf8a6bc25191/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220801171242-13911",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220801171242-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220801171242-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220801171242-13911",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220801171242-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "348dad8f40f0b0e2e45bb2dea576f19f780a5abd198ffe4687cac8cde9505630",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62835"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62836"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62837"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/348dad8f40f0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "927144f1f81d6f4f8b341697ef2f30bbf833270ca19c4df08a4c7de9d6a50a5f",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "36a75cfd26e0b119898f9567916858c4590125414a58496657a7667bf2804204",
	                    "EndpointID": "927144f1f81d6f4f8b341697ef2f30bbf833270ca19c4df08a4c7de9d6a50a5f",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220801171242-13911 -n running-upgrade-20220801171242-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220801171242-13911 -n running-upgrade-20220801171242-13911: exit status 6 (429.789944ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0801 17:13:49.427914   24850 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20220801171242-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-20220801171242-13911" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-20220801171242-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220801171242-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220801171242-13911: (2.420312081s)
--- FAIL: TestRunningBinaryUpgrade (69.49s)

TestKubernetesUpgrade (560.73s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220801171441-13911 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0801 17:15:11.914295   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:11.920025   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:11.930704   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:11.952429   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:11.992679   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:12.074944   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:12.235105   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:12.595445   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:13.236189   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:14.516941   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:17.078484   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:22.199892   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220801171441-13911 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m13.933265416s)

-- stdout --
	* [kubernetes-upgrade-20220801171441-13911] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-20220801171441-13911 in cluster kubernetes-upgrade-20220801171441-13911
	* Pulling base image ...
	* Downloading Kubernetes v1.16.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0801 17:14:41.999307   25219 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:14:41.999469   25219 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:14:41.999475   25219 out.go:309] Setting ErrFile to fd 2...
	I0801 17:14:41.999479   25219 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:14:41.999577   25219 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:14:42.000075   25219 out.go:303] Setting JSON to false
	I0801 17:14:42.014920   25219 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8053,"bootTime":1659391229,"procs":385,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:14:42.014998   25219 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:14:42.036705   25219 out.go:177] * [kubernetes-upgrade-20220801171441-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:14:42.079903   25219 notify.go:193] Checking for updates...
	I0801 17:14:42.101412   25219 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:14:42.122706   25219 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:14:42.143996   25219 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:14:42.166230   25219 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:14:42.187826   25219 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:14:42.210800   25219 config.go:180] Loaded profile config "cert-expiration-20220801171201-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:14:42.210891   25219 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:14:42.280140   25219 docker.go:137] docker version: linux-20.10.17
	I0801 17:14:42.280274   25219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:14:42.413450   25219 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:14:42.340906125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:14:42.456887   25219 out.go:177] * Using the docker driver based on user configuration
	I0801 17:14:42.478058   25219 start.go:284] selected driver: docker
	I0801 17:14:42.478132   25219 start.go:808] validating driver "docker" against <nil>
	I0801 17:14:42.478199   25219 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:14:42.481737   25219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:14:42.614378   25219 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:14:42.541863293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:14:42.614513   25219 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0801 17:14:42.614656   25219 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0801 17:14:42.636114   25219 out.go:177] * Using Docker Desktop driver with root privileges
	I0801 17:14:42.657342   25219 cni.go:95] Creating CNI manager for ""
	I0801 17:14:42.657372   25219 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:14:42.657388   25219 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220801171441-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220801171441-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:14:42.679412   25219 out.go:177] * Starting control plane node kubernetes-upgrade-20220801171441-13911 in cluster kubernetes-upgrade-20220801171441-13911
	I0801 17:14:42.721124   25219 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:14:42.742220   25219 out.go:177] * Pulling base image ...
	I0801 17:14:42.783986   25219 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:14:42.784028   25219 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:14:42.847474   25219 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:14:42.847496   25219 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:14:42.873935   25219 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0801 17:14:42.873960   25219 cache.go:57] Caching tarball of preloaded images
	I0801 17:14:42.875485   25219 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:14:42.919176   25219 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0801 17:14:42.940223   25219 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0801 17:14:43.032394   25219 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0801 17:14:47.035510   25219 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0801 17:14:47.035655   25219 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0801 17:14:47.584870   25219 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
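
	The preload download above carries its expected MD5 in the URL's checksum query parameter, and the log then saves and verifies that checksum on disk before caching the tarball. A minimal Go sketch of that download-then-verify flow (illustrative only: downloadAndVerify is a hypothetical helper, not minikube's actual download package):

    // checksum_sketch.go - illustrative only; not minikube's real implementation.
    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    // downloadAndVerify fetches url to dest and checks its MD5 against wantMD5 (hex).
    func downloadAndVerify(url, dest, wantMD5 string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	out, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	// Hash the stream while writing it to disk, so no second read is needed.
    	h := md5.New()
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
    	}
    	return nil
    }

    func main() {
    	// URL and checksum taken verbatim from the download.go line above.
    	err := downloadAndVerify(
    		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
    		"preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
    		"326f3ce331abb64565b50b8c9e791244",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
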
	I0801 17:14:47.584954   25219 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/config.json ...
	I0801 17:14:47.584975   25219 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/config.json: {Name:mk705abd84d40b5c35af57f58b73af0d1ae517c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:14:47.585260   25219 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:14:47.585291   25219 start.go:371] acquiring machines lock for kubernetes-upgrade-20220801171441-13911: {Name:mkbf1520452a5fe9d7e151681aae23add69731c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:14:47.585383   25219 start.go:375] acquired machines lock for "kubernetes-upgrade-20220801171441-13911" in 84.538µs
	I0801 17:14:47.585407   25219 start.go:92] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220801171441-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220801171441-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:14:47.585451   25219 start.go:132] createHost starting for "" (driver="docker")
	I0801 17:14:47.628423   25219 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0801 17:14:47.628748   25219 start.go:166] libmachine.API.Create for "kubernetes-upgrade-20220801171441-13911" (driver="docker")
	I0801 17:14:47.628795   25219 client.go:168] LocalClient.Create starting
	I0801 17:14:47.628970   25219 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem
	I0801 17:14:47.629038   25219 main.go:134] libmachine: Decoding PEM data...
	I0801 17:14:47.629064   25219 main.go:134] libmachine: Parsing certificate...
	I0801 17:14:47.629153   25219 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem
	I0801 17:14:47.629206   25219 main.go:134] libmachine: Decoding PEM data...
	I0801 17:14:47.629223   25219 main.go:134] libmachine: Parsing certificate...
	I0801 17:14:47.630118   25219 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220801171441-13911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0801 17:14:47.695522   25219 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220801171441-13911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0801 17:14:47.695614   25219 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220801171441-13911] to gather additional debugging logs...
	I0801 17:14:47.695632   25219 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220801171441-13911
	W0801 17:14:47.758189   25219 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220801171441-13911 returned with exit code 1
	I0801 17:14:47.758214   25219 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220801171441-13911]: docker network inspect kubernetes-upgrade-20220801171441-13911: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220801171441-13911
	I0801 17:14:47.758238   25219 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220801171441-13911]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220801171441-13911
	
	** /stderr **
	I0801 17:14:47.758332   25219 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0801 17:14:47.821832   25219 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000ca04a8] misses:0}
	I0801 17:14:47.821871   25219 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:14:47.821891   25219 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220801171441-13911 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0801 17:14:47.821982   25219 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220801171441-13911 kubernetes-upgrade-20220801171441-13911
	W0801 17:14:47.885251   25219 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220801171441-13911 kubernetes-upgrade-20220801171441-13911 returned with exit code 1
	W0801 17:14:47.885289   25219 network_create.go:107] failed to create docker network kubernetes-upgrade-20220801171441-13911 192.168.49.0/24, will retry: subnet is taken
	I0801 17:14:47.885560   25219 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ca04a8] amended:false}} dirty:map[] misses:0}
	I0801 17:14:47.885577   25219 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:14:47.885778   25219 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ca04a8] amended:true}} dirty:map[192.168.49.0:0xc000ca04a8 192.168.58.0:0xc0001a4280] misses:0}
	I0801 17:14:47.885795   25219 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:14:47.885803   25219 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220801171441-13911 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0801 17:14:47.885874   25219 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220801171441-13911 kubernetes-upgrade-20220801171441-13911
	W0801 17:14:47.948659   25219 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220801171441-13911 kubernetes-upgrade-20220801171441-13911 returned with exit code 1
	W0801 17:14:47.948693   25219 network_create.go:107] failed to create docker network kubernetes-upgrade-20220801171441-13911 192.168.58.0/24, will retry: subnet is taken
	I0801 17:14:47.948989   25219 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ca04a8] amended:true}} dirty:map[192.168.49.0:0xc000ca04a8 192.168.58.0:0xc0001a4280] misses:1}
	I0801 17:14:47.949010   25219 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:14:47.949207   25219 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ca04a8] amended:true}} dirty:map[192.168.49.0:0xc000ca04a8 192.168.58.0:0xc0001a4280 192.168.67.0:0xc0001a4388] misses:1}
	I0801 17:14:47.949221   25219 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:14:47.949231   25219 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220801171441-13911 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0801 17:14:47.949309   25219 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220801171441-13911 kubernetes-upgrade-20220801171441-13911
	W0801 17:14:48.011766   25219 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220801171441-13911 kubernetes-upgrade-20220801171441-13911 returned with exit code 1
	W0801 17:14:48.011804   25219 network_create.go:107] failed to create docker network kubernetes-upgrade-20220801171441-13911 192.168.67.0/24, will retry: subnet is taken
	I0801 17:14:48.012091   25219 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ca04a8] amended:true}} dirty:map[192.168.49.0:0xc000ca04a8 192.168.58.0:0xc0001a4280 192.168.67.0:0xc0001a4388] misses:2}
	I0801 17:14:48.012115   25219 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:14:48.012337   25219 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ca04a8] amended:true}} dirty:map[192.168.49.0:0xc000ca04a8 192.168.58.0:0xc0001a4280 192.168.67.0:0xc0001a4388 192.168.76.0:0xc000ca04e0] misses:2}
	I0801 17:14:48.012365   25219 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:14:48.012373   25219 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220801171441-13911 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0801 17:14:48.012428   25219 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220801171441-13911 kubernetes-upgrade-20220801171441-13911
	I0801 17:14:48.106850   25219 network_create.go:99] docker network kubernetes-upgrade-20220801171441-13911 192.168.76.0/24 created
	I0801 17:14:48.106888   25219 kic.go:106] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-20220801171441-13911" container
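
	The sequence above (192.168.49.0 taken, 58.0 taken, 67.0 taken, 76.0 free) shows the subnet probe stepping the third octet by 9 and retrying docker network create until one succeeds. A rough Go sketch of that retry loop (createNetwork is illustrative, not the real network_create.go logic; error text is not inspected, any failure just advances to the next candidate):

    // subnet_retry_sketch.go - illustrative reconstruction of the retry loop above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // createNetwork walks candidate /24 subnets, stepping the third octet by 9
    // as the log suggests, until `docker network create` succeeds.
    func createNetwork(name string) (string, error) {
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		gateway := fmt.Sprintf("192.168.%d.1", octet)
    		cmd := exec.Command("docker", "network", "create",
    			"--driver=bridge",
    			"--subnet="+subnet,
    			"--gateway="+gateway,
    			"-o", "--ip-masq", "-o", "--icc",
    			"-o", "com.docker.network.driver.mtu=1500",
    			name)
    		if err := cmd.Run(); err != nil {
    			// Most likely "subnet is taken"; try the next range.
    			continue
    		}
    		return subnet, nil
    	}
    	return "", fmt.Errorf("no free subnet found for network %q", name)
    }

    func main() {
    	subnet, err := createNetwork("kubernetes-upgrade-20220801171441-13911")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("created network on", subnet)
    }
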
	I0801 17:14:48.106970   25219 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0801 17:14:48.173839   25219 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220801171441-13911 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220801171441-13911 --label created_by.minikube.sigs.k8s.io=true
	I0801 17:14:48.236756   25219 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220801171441-13911
	I0801 17:14:48.236892   25219 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220801171441-13911-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220801171441-13911 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220801171441-13911:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib
	I0801 17:14:48.688317   25219 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220801171441-13911
	I0801 17:14:48.688358   25219 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:14:48.688400   25219 kic.go:179] Starting extracting preloaded images to volume ...
	I0801 17:14:48.688497   25219 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220801171441-13911:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0801 17:14:52.514711   25219 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220801171441-13911:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.826116386s)
	I0801 17:14:52.514744   25219 kic.go:188] duration metric: took 3.826299 seconds to extract preloaded images to volume
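
	The extraction works by mounting both the host-side tarball (read-only) and the named volume into a throw-away container whose entrypoint is tar, exactly as the docker run line above shows. A self-contained Go sketch of the same trick (the paths and image in main are placeholders, not the values from this run):

    // volume_extract_sketch.go - illustrative; mirrors the `docker run ... tar`
    // invocation in the log rather than any internal minikube API.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // extractToVolume unpacks an lz4 tarball into a named docker volume by
    // mounting both into a short-lived container that only runs tar.
    func extractToVolume(tarball, volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("extract failed: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	// Placeholder arguments for illustration only.
    	err := extractToVolume("preloaded.tar.lz4", "my-volume", "ubuntu:20.04")
    	if err != nil {
    		fmt.Println(err)
    	}
    }
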
	I0801 17:14:52.514831   25219 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0801 17:14:52.647330   25219 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220801171441-13911 --name kubernetes-upgrade-20220801171441-13911 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220801171441-13911 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220801171441-13911 --network kubernetes-upgrade-20220801171441-13911 --ip 192.168.76.2 --volume kubernetes-upgrade-20220801171441-13911:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8
	I0801 17:14:53.014452   25219 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220801171441-13911 --format={{.State.Running}}
	I0801 17:14:53.086322   25219 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220801171441-13911 --format={{.State.Status}}
	I0801 17:14:53.160815   25219 cli_runner.go:164] Run: docker exec kubernetes-upgrade-20220801171441-13911 stat /var/lib/dpkg/alternatives/iptables
	I0801 17:14:53.289264   25219 oci.go:144] the created container "kubernetes-upgrade-20220801171441-13911" has a running status.
	I0801 17:14:53.289291   25219 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa...
	I0801 17:14:53.403287   25219 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0801 17:14:53.516661   25219 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220801171441-13911 --format={{.State.Status}}
	I0801 17:14:53.587362   25219 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0801 17:14:53.587387   25219 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220801171441-13911 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0801 17:14:53.704817   25219 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220801171441-13911 --format={{.State.Status}}
	I0801 17:14:53.776990   25219 machine.go:88] provisioning docker machine ...
	I0801 17:14:53.777042   25219 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220801171441-13911"
	I0801 17:14:53.777147   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:53.849941   25219 main.go:134] libmachine: Using SSH client type: native
	I0801 17:14:53.850117   25219 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63327 <nil> <nil>}
	I0801 17:14:53.850132   25219 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220801171441-13911 && echo "kubernetes-upgrade-20220801171441-13911" | sudo tee /etc/hostname
	I0801 17:14:53.974188   25219 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220801171441-13911
	
	I0801 17:14:53.974295   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:54.052135   25219 main.go:134] libmachine: Using SSH client type: native
	I0801 17:14:54.052297   25219 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63327 <nil> <nil>}
	I0801 17:14:54.052312   25219 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220801171441-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220801171441-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220801171441-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:14:54.164440   25219 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:14:54.164462   25219 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:14:54.164488   25219 ubuntu.go:177] setting up certificates
	I0801 17:14:54.164496   25219 provision.go:83] configureAuth start
	I0801 17:14:54.164563   25219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:54.237327   25219 provision.go:138] copyHostCerts
	I0801 17:14:54.237410   25219 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:14:54.237420   25219 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:14:54.237529   25219 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:14:54.237714   25219 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:14:54.237723   25219 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:14:54.237786   25219 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:14:54.237932   25219 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:14:54.237938   25219 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:14:54.237999   25219 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:14:54.238114   25219 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220801171441-13911 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220801171441-13911]
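
	The server certificate above is issued with a SAN list covering the container's static IP, loopback, and the machine's hostnames, so the Docker daemon's TLS endpoint validates under any of those names. A compact Go sketch of issuing a certificate with that SAN shape (self-signed here for brevity; the real flow signs with the ca.pem/ca-key.pem pair read earlier):

    // servercert_sketch.go - minimal sketch of a server cert with SANs;
    // illustrative, not minikube's actual provision helper.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-20220801171441-13911"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log line: IPs and DNS names the endpoint must answer for.
    		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-20220801171441-13911"},
    	}
    	// Self-signed: template doubles as parent. Sign with CA cert/key instead
    	// to reproduce the log's ca.pem-based flow.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
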
	I0801 17:14:54.371852   25219 provision.go:172] copyRemoteCerts
	I0801 17:14:54.371914   25219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:14:54.371974   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:54.446929   25219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63327 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:14:54.532409   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 17:14:54.550931   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:14:54.567441   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0801 17:14:54.584094   25219 provision.go:86] duration metric: configureAuth took 419.58086ms
	I0801 17:14:54.584109   25219 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:14:54.584246   25219 config.go:180] Loaded profile config "kubernetes-upgrade-20220801171441-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0801 17:14:54.584297   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:54.658070   25219 main.go:134] libmachine: Using SSH client type: native
	I0801 17:14:54.658229   25219 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63327 <nil> <nil>}
	I0801 17:14:54.658245   25219 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:14:54.770310   25219 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:14:54.770321   25219 ubuntu.go:71] root file system type: overlay
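
	The "overlay" result comes from probing the guest's root filesystem type with GNU df, which drives the container-runtime storage settings that follow. A tiny Go equivalent of that probe (assumes GNU coreutils, i.e. it must run inside the Linux guest, not on the macOS host):

    // fstype_sketch.go - illustrative equivalent of `df --output=fstype / | tail -n 1`.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("df", "--output=fstype", "/").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fields := strings.Fields(string(out))
    	// First field is the header "Type"; the last is the filesystem type itself.
    	fmt.Println("root fs type:", fields[len(fields)-1])
    }
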
	I0801 17:14:54.770465   25219 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:14:54.770539   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:54.847398   25219 main.go:134] libmachine: Using SSH client type: native
	I0801 17:14:54.847557   25219 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63327 <nil> <nil>}
	I0801 17:14:54.847607   25219 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:14:54.969219   25219 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:14:54.969305   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:55.043382   25219 main.go:134] libmachine: Using SSH client type: native
	I0801 17:14:55.043534   25219 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63327 <nil> <nil>}
	I0801 17:14:55.043547   25219 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:14:55.618086   25219 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-02 00:14:54.978378976 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
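	The unit update is deliberately two-phase: the new file is written to docker.service.new, diffed against the live unit, and only swapped in (followed by daemon-reload, enable, restart) when they differ, which is why the full diff appears in the log. A sketch of the same idiom in Go (treats any non-zero diff exit as "changed", though diff also exits non-zero on errors; assumes passwordless sudo, as inside the kic container):

    // unitswap_sketch.go - illustrative version of the "write .new, diff, swap,
    // restart" idiom shown above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func updateUnit(current, next string) error {
    	// diff exits 0 when the files match; in that case there is nothing to do.
    	if exec.Command("sudo", "diff", "-u", current, next).Run() == nil {
    		return nil
    	}
    	steps := [][]string{
    		{"sudo", "mv", next, current},
    		{"sudo", "systemctl", "-f", "daemon-reload"},
    		{"sudo", "systemctl", "-f", "enable", "docker"},
    		{"sudo", "systemctl", "-f", "restart", "docker"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v failed: %v\n%s", s, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := updateUnit("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new"); err != nil {
    		fmt.Println(err)
    	}
    }
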
	I0801 17:14:55.618109   25219 machine.go:91] provisioned docker machine in 1.841077215s
	I0801 17:14:55.618115   25219 client.go:171] LocalClient.Create took 7.989222459s
	I0801 17:14:55.618131   25219 start.go:174] duration metric: libmachine.API.Create for "kubernetes-upgrade-20220801171441-13911" took 7.989294086s
	I0801 17:14:55.618140   25219 start.go:307] post-start starting for "kubernetes-upgrade-20220801171441-13911" (driver="docker")
	I0801 17:14:55.618145   25219 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:14:55.618226   25219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:14:55.618283   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:55.690350   25219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63327 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:14:55.779192   25219 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:14:55.782776   25219 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:14:55.782791   25219 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:14:55.782801   25219 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:14:55.782810   25219 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:14:55.782820   25219 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:14:55.782927   25219 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:14:55.783063   25219 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:14:55.783200   25219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:14:55.790141   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:14:55.812326   25219 start.go:310] post-start completed in 194.170976ms
	I0801 17:14:55.812850   25219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:55.884312   25219 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/config.json ...
	I0801 17:14:55.884699   25219 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:14:55.884748   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:55.955529   25219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63327 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:14:56.038920   25219 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:14:56.043298   25219 start.go:135] duration metric: createHost completed in 8.457744806s
	I0801 17:14:56.043312   25219 start.go:82] releasing machines lock for "kubernetes-upgrade-20220801171441-13911", held for 8.457825017s
	I0801 17:14:56.043383   25219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:56.114183   25219 ssh_runner.go:195] Run: systemctl --version
	I0801 17:14:56.114186   25219 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:14:56.114244   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:56.114255   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:56.191709   25219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63327 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:14:56.192878   25219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63327 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:14:56.466311   25219 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:14:56.475936   25219 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:14:56.476002   25219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:14:56.485048   25219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:14:56.498261   25219 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:14:56.563320   25219 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:14:56.633708   25219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:14:56.700110   25219 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:14:56.898186   25219 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:14:56.932250   25219 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:14:57.021307   25219 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0801 17:14:57.021544   25219 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220801171441-13911 dig +short host.docker.internal
	I0801 17:14:57.150857   25219 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:14:57.153152   25219 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:14:57.157111   25219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:14:57.166244   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:14:57.237433   25219 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:14:57.237504   25219 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:14:57.266396   25219 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0801 17:14:57.266412   25219 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:14:57.266498   25219 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:14:57.294996   25219 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0801 17:14:57.295013   25219 cache_images.go:84] Images are preloaded, skipping loading
	I0801 17:14:57.295091   25219 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:14:57.366520   25219 cni.go:95] Creating CNI manager for ""
	I0801 17:14:57.366532   25219 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:14:57.366543   25219 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:14:57.366558   25219 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220801171441-13911 NodeName:kubernetes-upgrade-20220801171441-13911 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:14:57.366658   25219 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220801171441-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20220801171441-13911
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
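	The rendered kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and promoted to kubeadm.yaml before init. A sketch of reading it back from this profile's node:

		minikube -p kubernetes-upgrade-20220801171441-13911 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml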
	I0801 17:14:57.366733   25219 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220801171441-13911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220801171441-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:14:57.366794   25219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0801 17:14:57.374208   25219 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:14:57.374259   25219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:14:57.381367   25219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0801 17:14:57.393448   25219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:14:57.405906   25219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
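	The two kubelet unit files just copied (10-kubeadm.conf and kubelet.service) can be checked in one step with systemd's cat verb, which prints a unit together with its drop-ins. A sketch using this run's profile name:

		minikube -p kubernetes-upgrade-20220801171441-13911 ssh -- sudo systemctl cat kubelet --no-pager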
	I0801 17:14:57.417927   25219 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:14:57.421572   25219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:14:57.430680   25219 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911 for IP: 192.168.76.2
	I0801 17:14:57.430799   25219 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:14:57.430846   25219 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:14:57.430885   25219 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.key
	I0801 17:14:57.430900   25219 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.crt with IP's: []
	I0801 17:14:57.480738   25219 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.crt ...
	I0801 17:14:57.480747   25219 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.crt: {Name:mk0b1cfa20fe44e5008274a94af136066025a5c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:14:57.481061   25219 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.key ...
	I0801 17:14:57.481069   25219 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.key: {Name:mkf82f4cff35485a3e4aaa30be53a171543a8104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:14:57.481267   25219 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.key.31bdca25
	I0801 17:14:57.481283   25219 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0801 17:14:57.586920   25219 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.crt.31bdca25 ...
	I0801 17:14:57.586930   25219 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.crt.31bdca25: {Name:mka99f0390289d3186819d827006198015659ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:14:57.587176   25219 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.key.31bdca25 ...
	I0801 17:14:57.587186   25219 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.key.31bdca25: {Name:mk61d637dc7f3d2dc95ede20b51c79a2ad21bbce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:14:57.587388   25219 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.crt
	I0801 17:14:57.587555   25219 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.key
	I0801 17:14:57.587721   25219 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/proxy-client.key
	I0801 17:14:57.587736   25219 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/proxy-client.crt with IP's: []
	I0801 17:14:58.004402   25219 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/proxy-client.crt ...
	I0801 17:14:58.004417   25219 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/proxy-client.crt: {Name:mkc6b56f96ee71eb215ade31b0c43b5ef1db4ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:14:58.004678   25219 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/proxy-client.key ...
	I0801 17:14:58.004686   25219 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/proxy-client.key: {Name:mkc1aba755b0140788da5bf0dbc08389c8223949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:14:58.005061   25219 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:14:58.005105   25219 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:14:58.005113   25219 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:14:58.005140   25219 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:14:58.005169   25219 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:14:58.005199   25219 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:14:58.005267   25219 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:14:58.005775   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:14:58.023094   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0801 17:14:58.039732   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:14:58.056227   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0801 17:14:58.072882   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:14:58.089292   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:14:58.106423   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:14:58.123594   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:14:58.140197   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:14:58.157359   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:14:58.173926   25219 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:14:58.190552   25219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:14:58.203269   25219 ssh_runner.go:195] Run: openssl version
	I0801 17:14:58.208667   25219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:14:58.216633   25219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:14:58.220384   25219 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:14:58.220430   25219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:14:58.225677   25219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 17:14:58.233243   25219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:14:58.240703   25219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:14:58.244703   25219 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:14:58.244742   25219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:14:58.249822   25219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:14:58.257453   25219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:14:58.265166   25219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:14:58.269057   25219 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:14:58.269093   25219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:14:58.274175   25219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
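	The three symlink names above are not arbitrary: b5213941.0, 51391683.0, and 3ec20f2e.0 are the OpenSSL subject-hash values printed by the preceding `openssl x509 -hash` runs, and a hash-named link in /etc/ssl/certs is how OpenSSL finds a CA at verification time. The same check by hand, as a sketch with the minikubeCA path from this run:

		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
		ls -l "/etc/ssl/certs/${h}.0"                                                  # links back to minikubeCA.pem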
	I0801 17:14:58.281674   25219 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220801171441-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220801171441-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:14:58.281770   25219 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:14:58.311110   25219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:14:58.319149   25219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:14:58.326415   25219 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:14:58.326464   25219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:14:58.333770   25219 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:14:58.333792   25219 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:14:59.056372   25219 out.go:204]   - Generating certificates and keys ...
	I0801 17:15:01.174493   25219 out.go:204]   - Booting up control plane ...
	W0801 17:16:56.092176   25219 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220801171441-13911 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220801171441-13911 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
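	Before the retry below, note that every probe in the output failed the same way: the kubelet never answered on port 10248, so the static control-plane pods were never started. A triage sketch against this profile's node, using the commands the kubeadm output itself suggests:

		minikube -p kubernetes-upgrade-20220801171441-13911 ssh -- systemctl status kubelet --no-pager
		minikube -p kubernetes-upgrade-20220801171441-13911 ssh -- sudo journalctl -u kubelet -n 50 --no-pager
		minikube -p kubernetes-upgrade-20220801171441-13911 ssh -- curl -sS http://localhost:10248/healthz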
	I0801 17:16:56.092209   25219 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0801 17:16:56.518415   25219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:16:56.531381   25219 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:16:56.531445   25219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:16:56.540994   25219 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:16:56.541023   25219 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:16:57.484041   25219 out.go:204]   - Generating certificates and keys ...
	I0801 17:16:58.321451   25219 out.go:204]   - Booting up control plane ...
	I0801 17:18:53.243849   25219 kubeadm.go:397] StartCluster complete in 3m54.959501601s
	I0801 17:18:53.243927   25219 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:18:53.271705   25219 logs.go:274] 0 containers: []
	W0801 17:18:53.271721   25219 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:18:53.271786   25219 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:18:53.301529   25219 logs.go:274] 0 containers: []
	W0801 17:18:53.301543   25219 logs.go:276] No container was found matching "etcd"
	I0801 17:18:53.301603   25219 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:18:53.331473   25219 logs.go:274] 0 containers: []
	W0801 17:18:53.331494   25219 logs.go:276] No container was found matching "coredns"
	I0801 17:18:53.331572   25219 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:18:53.362142   25219 logs.go:274] 0 containers: []
	W0801 17:18:53.362156   25219 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:18:53.362222   25219 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:18:53.390377   25219 logs.go:274] 0 containers: []
	W0801 17:18:53.390396   25219 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:18:53.390455   25219 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:18:53.417871   25219 logs.go:274] 0 containers: []
	W0801 17:18:53.417885   25219 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:18:53.417944   25219 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:18:53.446218   25219 logs.go:274] 0 containers: []
	W0801 17:18:53.446230   25219 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:18:53.446288   25219 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:18:53.474189   25219 logs.go:274] 0 containers: []
	W0801 17:18:53.474202   25219 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:18:53.474213   25219 logs.go:123] Gathering logs for Docker ...
	I0801 17:18:53.474225   25219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:18:53.490275   25219 logs.go:123] Gathering logs for container status ...
	I0801 17:18:53.490289   25219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:18:55.542662   25219 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052337014s)
	I0801 17:18:55.542830   25219 logs.go:123] Gathering logs for kubelet ...
	I0801 17:18:55.542840   25219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:18:55.583876   25219 logs.go:123] Gathering logs for dmesg ...
	I0801 17:18:55.583897   25219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:18:55.599911   25219 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:18:55.599927   25219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:18:55.655353   25219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
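	The connection-refused from kubectl is consistent with the empty container listings above: with no k8s_kube-apiserver container running, nothing serves localhost:8443 inside the node. A sketch of confirming that directly (assumes ss is available in the Ubuntu 20.04 node image reported earlier; the grep runs on the host side of the ssh):

		minikube -p kubernetes-upgrade-20220801171441-13911 ssh -- sudo ss -ltn | grep 8443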
	W0801 17:18:55.655370   25219 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0801 17:18:55.655384   25219 out.go:239] * 
	W0801 17:18:55.655497   25219 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 17:18:55.655513   25219 out.go:239] * 
	W0801 17:18:55.656092   25219 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 17:18:55.735020   25219 out.go:177] 
	W0801 17:18:55.778310   25219 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 17:18:55.778481   25219 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0801 17:18:55.778594   25219 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0801 17:18:55.821890   25219 out.go:177] 

** /stderr **
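The Suggestion line in the stderr block above points at the kubelet cgroup driver. A minimal sketch of that workaround for this run's profile (a hypothetical invocation, not part of the test run; every flag except --extra-config is taken from the failing start command):

	out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220801171441-13911 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --extra-config=kubelet.cgroup-driver=systemd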
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220801171441-13911 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220801171441-13911
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220801171441-13911: (1.649871492s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220801171441-13911 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220801171441-13911 status --format={{.Host}}: exit status 7 (118.545567ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
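minikube status encodes the host, kubelet, and apiserver states as bits of its exit code (per its --help text), so exit status 7 is consistent with the "Stopped" stdout above: all three components are down, as expected immediately after a stop. A quick manual recheck, assuming the same profile:

	out/minikube-darwin-amd64 -p kubernetes-upgrade-20220801171441-13911 status --format={{.Host}}; echo "status exit: $?"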
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220801171441-13911 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220801171441-13911 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker : (4m33.436898654s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220801171441-13911 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220801171441-13911 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220801171441-13911 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (436.874756ms)

-- stdout --
	* [kubernetes-upgrade-20220801171441-13911] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.24.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220801171441-13911
	    minikube start -p kubernetes-upgrade-20220801171441-13911 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220801171441-139112 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.24.3, by running:
	    
	    minikube start -p kubernetes-upgrade-20220801171441-13911 --kubernetes-version=v1.24.3
	    

** /stderr **
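The downgrade is rejected as designed (exit status 106). The kubectl version check run earlier can be narrowed to just the server's reported version; a sketch assuming jq is installed:

	kubectl --context kubernetes-upgrade-20220801171441-13911 version --output=json | jq -r .serverVersion.gitVersion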
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220801171441-13911 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220801171441-13911 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker : (20.581750946s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-08-01 17:23:52.251267 -0700 PDT m=+2995.579696136
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220801171441-13911
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220801171441-13911:

-- stdout --
	[
	    {
	        "Id": "dd110c01b562fa7162f2689cc370b5c644f6f10c9c19a80c6bb9f6c505ea66b1",
	        "Created": "2022-08-02T00:14:52.721058848Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 161600,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:18:59.011368203Z",
	            "FinishedAt": "2022-08-02T00:18:56.4089793Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/dd110c01b562fa7162f2689cc370b5c644f6f10c9c19a80c6bb9f6c505ea66b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd110c01b562fa7162f2689cc370b5c644f6f10c9c19a80c6bb9f6c505ea66b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd110c01b562fa7162f2689cc370b5c644f6f10c9c19a80c6bb9f6c505ea66b1/hosts",
	        "LogPath": "/var/lib/docker/containers/dd110c01b562fa7162f2689cc370b5c644f6f10c9c19a80c6bb9f6c505ea66b1/dd110c01b562fa7162f2689cc370b5c644f6f10c9c19a80c6bb9f6c505ea66b1-json.log",
	        "Name": "/kubernetes-upgrade-20220801171441-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220801171441-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220801171441-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/418bb96bd243e25905906190cf2ec58d057566d05e58b4a668003d92e63be7e8-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/418bb96bd243e25905906190cf2ec58d057566d05e58b4a668003d92e63be7e8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/418bb96bd243e25905906190cf2ec58d057566d05e58b4a668003d92e63be7e8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/418bb96bd243e25905906190cf2ec58d057566d05e58b4a668003d92e63be7e8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220801171441-13911",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220801171441-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220801171441-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220801171441-13911",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220801171441-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d150ad7b1b826a966ea70989986f1a19079ac7e39602e3f86f08ea21f18fbd25",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64040"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64041"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64042"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64043"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64044"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d150ad7b1b82",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220801171441-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dd110c01b562",
	                        "kubernetes-upgrade-20220801171441-13911"
	                    ],
	                    "NetworkID": "c55fc61be842634c8c7358995a21f39a9f4f89adecdf560b48fcd08bc8964c85",
	                    "EndpointID": "eaf2d63f98a6532527a933c80c409be616ae9a97923803aa61c0753a159a3176",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
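Most of the inspect dump above is not needed for triage; a Go-template filter (a sketch against the same container) extracts just the container state and the profile network's IP:

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "kubernetes-upgrade-20220801171441-13911").IPAddress}}' kubernetes-upgrade-20220801171441-13911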
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220801171441-13911 -n kubernetes-upgrade-20220801171441-13911
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220801171441-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220801171441-13911 logs -n 25: (4.09548942s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-20220801171654-13911           | pause-20220801171654-13911              | jenkins | v1.26.0 | 01 Aug 22 17:18 PDT | 01 Aug 22 17:18 PDT |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	| stop    | -p                                      | kubernetes-upgrade-20220801171441-13911 | jenkins | v1.26.0 | 01 Aug 22 17:18 PDT | 01 Aug 22 17:18 PDT |
	|         | kubernetes-upgrade-20220801171441-13911 |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220801171441-13911 | jenkins | v1.26.0 | 01 Aug 22 17:18 PDT | 01 Aug 22 17:23 PDT |
	|         | kubernetes-upgrade-20220801171441-13911 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| delete  | -p pause-20220801171654-13911           | pause-20220801171654-13911              | jenkins | v1.26.0 | 01 Aug 22 17:19 PDT | 01 Aug 22 17:19 PDT |
	| start   | -p                                      | NoKubernetes-20220801171923-13911       | jenkins | v1.26.0 | 01 Aug 22 17:19 PDT |                     |
	|         | NoKubernetes-20220801171923-13911       |                                         |         |         |                     |                     |
	|         | --no-kubernetes                         |                                         |         |         |                     |                     |
	|         | --kubernetes-version=1.20               |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220801171923-13911       | jenkins | v1.26.0 | 01 Aug 22 17:19 PDT | 01 Aug 22 17:19 PDT |
	|         | NoKubernetes-20220801171923-13911       |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220801171923-13911       | jenkins | v1.26.0 | 01 Aug 22 17:19 PDT | 01 Aug 22 17:20 PDT |
	|         | NoKubernetes-20220801171923-13911       |                                         |         |         |                     |                     |
	|         | --no-kubernetes --driver=docker         |                                         |         |         |                     |                     |
	| delete  | -p                                      | NoKubernetes-20220801171923-13911       | jenkins | v1.26.0 | 01 Aug 22 17:20 PDT | 01 Aug 22 17:20 PDT |
	|         | NoKubernetes-20220801171923-13911       |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220801171923-13911       | jenkins | v1.26.0 | 01 Aug 22 17:20 PDT | 01 Aug 22 17:20 PDT |
	|         | NoKubernetes-20220801171923-13911       |                                         |         |         |                     |                     |
	|         | --no-kubernetes --driver=docker         |                                         |         |         |                     |                     |
	| ssh     | -p                                      | NoKubernetes-20220801171923-13911       | jenkins | v1.26.0 | 01 Aug 22 17:20 PDT |                     |
	|         | NoKubernetes-20220801171923-13911       |                                         |         |         |                     |                     |
	|         | sudo systemctl is-active --quiet        |                                         |         |         |                     |                     |
	|         | service kubelet                         |                                         |         |         |                     |                     |
	| profile | list                                    | minikube                                | jenkins | v1.26.0 | 01 Aug 22 17:20 PDT | 01 Aug 22 17:20 PDT |
	| profile | list --output=json                      | minikube                                | jenkins | v1.26.0 | 01 Aug 22 17:20 PDT | 01 Aug 22 17:20 PDT |
	| stop    | -p                                      | NoKubernetes-20220801171923-13911       | jenkins | v1.26.0 | 01 Aug 22 17:20 PDT | 01 Aug 22 17:20 PDT |
	|         | NoKubernetes-20220801171923-13911       |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220801171923-13911       | jenkins | v1.26.0 | 01 Aug 22 17:20 PDT | 01 Aug 22 17:20 PDT |
	|         | NoKubernetes-20220801171923-13911       |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | -p                                      | NoKubernetes-20220801171923-13911       | jenkins | v1.26.0 | 01 Aug 22 17:20 PDT |                     |
	|         | NoKubernetes-20220801171923-13911       |                                         |         |         |                     |                     |
	|         | sudo systemctl is-active --quiet        |                                         |         |         |                     |                     |
	|         | service kubelet                         |                                         |         |         |                     |                     |
	| delete  | -p                                      | NoKubernetes-20220801171923-13911       | jenkins | v1.26.0 | 01 Aug 22 17:20 PDT | 01 Aug 22 17:20 PDT |
	|         | NoKubernetes-20220801171923-13911       |                                         |         |         |                     |                     |
	| start   | -p auto-20220801171037-13911            | auto-20220801171037-13911               | jenkins | v1.26.0 | 01 Aug 22 17:20 PDT | 01 Aug 22 17:21 PDT |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --alsologtostderr                       |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | -p auto-20220801171037-13911            | auto-20220801171037-13911               | jenkins | v1.26.0 | 01 Aug 22 17:21 PDT | 01 Aug 22 17:21 PDT |
	|         | pgrep -a kubelet                        |                                         |         |         |                     |                     |
	| delete  | -p auto-20220801171037-13911            | auto-20220801171037-13911               | jenkins | v1.26.0 | 01 Aug 22 17:21 PDT | 01 Aug 22 17:21 PDT |
	| start   | -p                                      | kindnet-20220801171038-13911            | jenkins | v1.26.0 | 01 Aug 22 17:21 PDT | 01 Aug 22 17:22 PDT |
	|         | kindnet-20220801171038-13911            |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --alsologtostderr                       |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker           |                                         |         |         |                     |                     |
	| ssh     | -p                                      | kindnet-20220801171038-13911            | jenkins | v1.26.0 | 01 Aug 22 17:22 PDT | 01 Aug 22 17:22 PDT |
	|         | kindnet-20220801171038-13911            |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                        |                                         |         |         |                     |                     |
	| delete  | -p                                      | kindnet-20220801171038-13911            | jenkins | v1.26.0 | 01 Aug 22 17:23 PDT | 01 Aug 22 17:23 PDT |
	|         | kindnet-20220801171038-13911            |                                         |         |         |                     |                     |
	| start   | -p cilium-20220801171038-13911          | cilium-20220801171038-13911             | jenkins | v1.26.0 | 01 Aug 22 17:23 PDT |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true           |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium          |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220801171441-13911 | jenkins | v1.26.0 | 01 Aug 22 17:23 PDT |                     |
	|         | kubernetes-upgrade-20220801171441-13911 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220801171441-13911 | jenkins | v1.26.0 | 01 Aug 22 17:23 PDT | 01 Aug 22 17:23 PDT |
	|         | kubernetes-upgrade-20220801171441-13911 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:23:31
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:23:31.717231   28358 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:23:31.717465   28358 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:23:31.717471   28358 out.go:309] Setting ErrFile to fd 2...
	I0801 17:23:31.717475   28358 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:23:31.717578   28358 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:23:31.718041   28358 out.go:303] Setting JSON to false
	I0801 17:23:31.733107   28358 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8582,"bootTime":1659391229,"procs":378,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:23:31.733233   28358 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:23:31.754495   28358 out.go:177] * [kubernetes-upgrade-20220801171441-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:23:31.812675   28358 notify.go:193] Checking for updates...
	I0801 17:23:31.834754   28358 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:23:31.893322   28358 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:23:31.951610   28358 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:23:31.972667   28358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:23:31.993681   28358 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:23:32.015150   28358 config.go:180] Loaded profile config "kubernetes-upgrade-20220801171441-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:23:32.015781   28358 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:23:32.086910   28358 docker.go:137] docker version: linux-20.10.17
	I0801 17:23:32.087024   28358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:23:32.221268   28358 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:56 SystemTime:2022-08-02 00:23:32.16241465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
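The docker system info --format "{{json .}}" calls above are how the driver snapshots daemon state before validating the profile against it. A minimal Go sketch of that shell-out-and-decode step, assuming only that the Docker CLI is on PATH; the struct below is an illustrative subset of the JSON keys visible in the dump, not minikube's actual type:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// dockerInfo decodes a subset of `docker system info` output; the field
// names match JSON keys seen in the dump above.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker system info: %v", err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("docker %s on %s: %d CPUs, %d MiB\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal>>20)
}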
	I0801 17:23:32.259051   28358 out.go:177] * Using the docker driver based on existing profile
	I0801 17:23:32.300757   28358 start.go:284] selected driver: docker
	I0801 17:23:32.300780   28358 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220801171441-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220801171441-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:23:32.300898   28358 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:23:32.303140   28358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:23:32.437997   28358 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:56 SystemTime:2022-08-02 00:23:32.380678994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:23:32.438154   28358 cni.go:95] Creating CNI manager for ""
	I0801 17:23:32.438168   28358 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:23:32.438191   28358 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220801171441-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220801171441-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:23:32.461707   28358 out.go:177] * Starting control plane node kubernetes-upgrade-20220801171441-13911 in cluster kubernetes-upgrade-20220801171441-13911
	I0801 17:23:32.482541   28358 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:23:32.503710   28358 out.go:177] * Pulling base image ...
	I0801 17:23:32.545610   28358 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:23:32.545631   28358 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:23:32.545672   28358 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0801 17:23:32.545691   28358 cache.go:57] Caching tarball of preloaded images
	I0801 17:23:32.545874   28358 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:23:32.545895   28358 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
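The preload check above amounts to a stat of a well-known cache path. A minimal sketch under the naming scheme visible in the log (preloaded-images-k8s-v18-<k8s-version>-<runtime>-overlay2-amd64.tar.lz4); preloadPath is a hypothetical helper, not minikube's API:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache path using the naming scheme visible in the
// log above; the function itself is illustrative, not minikube's API.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.24.3", "docker")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing, would download:", err)
		return
	}
	fmt.Println("found local preload, skipping download:", p)
}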
	I0801 17:23:32.546798   28358 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/config.json ...
	I0801 17:23:32.611351   28358 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:23:32.611367   28358 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:23:32.611381   28358 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:23:32.611438   28358 start.go:371] acquiring machines lock for kubernetes-upgrade-20220801171441-13911: {Name:mkbf1520452a5fe9d7e151681aae23add69731c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:23:32.611534   28358 start.go:375] acquired machines lock for "kubernetes-upgrade-20220801171441-13911" in 75.978µs
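The machines lock above is acquired with a 500ms retry delay and a 10m timeout. A stdlib-only sketch of that acquire-with-delay/timeout pattern; minikube uses a dedicated mutex library, so this shows only the shape of the loop, not the real implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file until timeout, sleeping delay
// between attempts -- the Delay/Timeout shape shown in the log line above.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to mutate machine state")
}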
	I0801 17:23:32.611559   28358 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:23:32.611568   28358 fix.go:55] fixHost starting: 
	I0801 17:23:32.611818   28358 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220801171441-13911 --format={{.State.Status}}
	I0801 17:23:32.682725   28358 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220801171441-13911: state=Running err=<nil>
	W0801 17:23:32.682751   28358 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:23:32.704687   28358 out.go:177] * Updating the running docker "kubernetes-upgrade-20220801171441-13911" container ...
	I0801 17:23:32.725349   28358 machine.go:88] provisioning docker machine ...
	I0801 17:23:32.725372   28358 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220801171441-13911"
	I0801 17:23:32.725438   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:32.798726   28358 main.go:134] libmachine: Using SSH client type: native
	I0801 17:23:32.798905   28358 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64040 <nil> <nil>}
	I0801 17:23:32.798919   28358 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220801171441-13911 && echo "kubernetes-upgrade-20220801171441-13911" | sudo tee /etc/hostname
	I0801 17:23:32.920529   28358 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220801171441-13911
	
	I0801 17:23:32.920625   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:32.998755   28358 main.go:134] libmachine: Using SSH client type: native
	I0801 17:23:32.998974   28358 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64040 <nil> <nil>}
	I0801 17:23:32.999008   28358 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220801171441-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220801171441-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220801171441-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:23:33.112432   28358 main.go:134] libmachine: SSH cmd err, output: <nil>: 
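Each provisioning step here is one command run over the container's published SSH port (127.0.0.1:64040 in this run). A minimal sketch with golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, but the client code itself is an assumption, not libmachine's:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path as logged by sshutil above; MINIKUBE_HOME is set in this run's env.
	key, err := os.ReadFile(os.ExpandEnv("$MINIKUBE_HOME/machines/kubernetes-upgrade-20220801171441-13911/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:64040", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	// One session per command, mirroring the provisioner's per-step SSH calls.
	out, err := sess.CombinedOutput("sudo hostname kubernetes-upgrade-20220801171441-13911")
	fmt.Printf("output=%q err=%v\n", out, err)
}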
	I0801 17:23:33.112455   28358 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:23:33.112474   28358 ubuntu.go:177] setting up certificates
	I0801 17:23:33.112488   28358 provision.go:83] configureAuth start
	I0801 17:23:33.112566   28358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:33.188292   28358 provision.go:138] copyHostCerts
	I0801 17:23:33.188417   28358 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:23:33.188427   28358 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:23:33.188556   28358 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:23:33.188831   28358 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:23:33.188841   28358 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:23:33.188928   28358 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:23:33.189127   28358 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:23:33.189135   28358 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:23:33.189215   28358 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:23:33.189379   28358 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220801171441-13911 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220801171441-13911]
	I0801 17:23:33.249079   28358 provision.go:172] copyRemoteCerts
	I0801 17:23:33.249141   28358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:23:33.249186   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:33.329849   28358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64040 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:23:33.413892   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:23:33.432226   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0801 17:23:33.448858   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 17:23:33.465446   28358 provision.go:86] duration metric: configureAuth took 352.940062ms
	I0801 17:23:33.465461   28358 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:23:33.465628   28358 config.go:180] Loaded profile config "kubernetes-upgrade-20220801171441-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:23:33.465689   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:33.549379   28358 main.go:134] libmachine: Using SSH client type: native
	I0801 17:23:33.549526   28358 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64040 <nil> <nil>}
	I0801 17:23:33.549536   28358 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:23:33.669822   28358 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:23:33.669843   28358 ubuntu.go:71] root file system type: overlay
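df --output=fstype / returning overlay is how the provisioner learns the container root is overlayfs. The same probe without shelling out, as a Linux-only sketch that scans /proc/mounts:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each line: device mountpoint fstype options dump pass.
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/" {
			fmt.Println("root file system type:", fields[2]) // "overlay" in the run above
			return
		}
	}
	log.Fatal("no mount entry for /")
}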
	I0801 17:23:33.670049   28358 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:23:33.670121   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:33.749879   28358 main.go:134] libmachine: Using SSH client type: native
	I0801 17:23:33.750025   28358 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64040 <nil> <nil>}
	I0801 17:23:33.750072   28358 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:23:33.870869   28358 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:23:33.870970   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:33.954632   28358 main.go:134] libmachine: Using SSH client type: native
	I0801 17:23:33.954823   28358 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64040 <nil> <nil>}
	I0801 17:23:33.954840   28358 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
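The diff ... || { mv ...; systemctl ... } one-liner above makes the unit update idempotent: the daemon is reloaded and restarted only when the rendered unit actually differs from what is installed. The same write-if-changed guard as a sketch; the path and reload command are illustrative:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// writeIfChanged installs newContent at path and reports whether the file
// actually changed -- the caller then knows a daemon restart is needed.
func writeIfChanged(path string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // identical: skip the restart, like the diff above
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := writeIfChanged("/lib/systemd/system/docker.service", []byte("...rendered unit...\n"))
	if err != nil {
		log.Fatal(err)
	}
	if changed {
		// Illustrative: reload and restart only when the unit changed.
		if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
			log.Fatalf("daemon-reload: %v: %s", err, out)
		}
		fmt.Println("unit updated, docker restart required")
	}
}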
	I0801 17:23:34.079295   28358 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:23:34.079309   28358 machine.go:91] provisioned docker machine in 1.353938554s
	I0801 17:23:34.079316   28358 start.go:307] post-start starting for "kubernetes-upgrade-20220801171441-13911" (driver="docker")
	I0801 17:23:34.079328   28358 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:23:34.079398   28358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:23:34.079441   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:34.156053   28358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64040 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:23:34.241213   28358 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:23:34.245292   28358 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:23:34.245306   28358 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:23:34.245314   28358 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:23:34.245318   28358 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:23:34.245328   28358 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:23:34.245432   28358 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:23:34.245561   28358 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:23:34.245711   28358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:23:34.252451   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:23:34.270929   28358 start.go:310] post-start completed in 191.596248ms
	I0801 17:23:34.271010   28358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:23:34.271060   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:34.352324   28358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64040 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:23:34.431882   28358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:23:34.436336   28358 fix.go:57] fixHost completed within 1.824743987s
	I0801 17:23:34.436351   28358 start.go:82] releasing machines lock for "kubernetes-upgrade-20220801171441-13911", held for 1.824788766s
	I0801 17:23:34.436443   28358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:34.511575   28358 ssh_runner.go:195] Run: systemctl --version
	I0801 17:23:34.511580   28358 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:23:34.511639   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:34.511651   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:34.600866   28358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64040 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:23:34.602725   28358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64040 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:23:34.688151   28358 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:23:34.872595   28358 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:23:34.872660   28358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:23:34.882553   28358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:23:34.896008   28358 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:23:34.983348   28358 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:23:35.075595   28358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:23:35.161440   28358 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:23:37.639433   28358 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.477945952s)
	I0801 17:23:37.639516   28358 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0801 17:23:37.724342   28358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:23:37.869963   28358 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0801 17:23:37.880502   28358 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0801 17:23:37.880615   28358 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0801 17:23:37.885449   28358 start.go:471] Will wait 60s for crictl version
	I0801 17:23:37.885512   28358 ssh_runner.go:195] Run: sudo crictl version
	I0801 17:23:37.985441   28358 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0801 17:23:37.985518   28358 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:23:38.102376   28358 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:23:38.913150   28225 out.go:204]   - Configuring RBAC rules ...
	I0801 17:23:39.290766   28225 cni.go:95] Creating CNI manager for "cilium"
	I0801 17:23:39.329637   28225 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0801 17:23:39.365657   28225 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
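The grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount ... guard ensures the BPF filesystem Cilium depends on is mounted exactly once. An equivalent Linux-only check in Go, sketched with golang.org/x/sys/unix and its BPF_FS_MAGIC constant:

package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/bpf", &st); err != nil {
		log.Fatal(err)
	}
	if st.Type == unix.BPF_FS_MAGIC {
		fmt.Println("bpffs already mounted at /sys/fs/bpf")
		return
	}
	// Not mounted: the provisioner would now run the
	// `mount bpffs -t bpf /sys/fs/bpf` fallback shown above.
	fmt.Println("bpffs missing; mount required")
}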
	I0801 17:23:39.395169   28225 cilium.go:816] Using pod CIDR: 10.244.0.0/16
	I0801 17:23:39.395180   28225 cilium.go:827] cilium options: {PodSubnet:10.244.0.0/16}
	I0801 17:23:39.395210   28225 cilium.go:831] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the less packets
	  # that will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon their
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: default
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: ""
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes init container wait until bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
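	  # Annotation (not part of the upstream cilium template): with ipam
	  # "cluster-pool", a /16 pool and cluster-pool-ipv4-mask-size 24, the
	  # operator can carve 2^(24-16) = 256 per-node /24 pod subnets, each with
	  # 254 usable pod addresses -- consistent with the pod CIDR 10.244.0.0/16
	  # logged above.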
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. This will need to stay
	  # until we figure out how to avoid "get" inside the preflight, and then
	  # should be removed ideally.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# The support for leases was introduced in coordination.k8s.io/v1 during Kubernetes 1.14 release.
	# In Cilium we currently don't support HA mode for K8s version < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration makes
	        # cilium a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount cgroup2 filesystem on the underlying Kubernetes node.
	      # We use nsenter command with host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go program binary is invoked to avoid any
	          # dependency on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. Copy the binary to the
	          # same directory where we install cilium cni plugin so that exec permissions
	          # are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount cgroup filesystem from the host to cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install cilium cni plugin in the host
	      - hostPath:
	          path:  /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install cilium cni configuration in the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version > 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.9.9@sha256:3726a965cd960295ca3c5e7f2b543c02096c0912c6652eb8bbb9ce54bcaa99d8"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
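
	A note on the mount-cgroup init container near the top of this manifest: it copies the statically linked cilium-mount helper into the CNI bin directory, then uses nsenter to enter PID 1's cgroup and mount namespaces and ensure a cgroup2 filesystem is mounted at CGROUP_ROOT. The Go sketch below illustrates only the "is cgroup2 already mounted here?" half of that job by scanning /proc/mounts; it is an illustrative stand-in, not cilium's actual cilium-mount implementation.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// cgroup2MountedAt reports whether /proc/mounts lists a cgroup2
	// filesystem mounted at exactly the given path.
	func cgroup2MountedAt(path string) (bool, error) {
		f, err := os.Open("/proc/mounts")
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// Each line: <source> <mountpoint> <fstype> <options> <dump> <pass>
			fields := strings.Fields(sc.Text())
			if len(fields) >= 3 && fields[1] == path && fields[2] == "cgroup2" {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		root := os.Getenv("CGROUP_ROOT") // e.g. /run/cilium/cgroupv2, as set in the manifest
		if root == "" {
			root = "/run/cilium/cgroupv2"
		}
		mounted, err := cgroup2MountedAt(root)
		if err != nil {
			fmt.Fprintln(os.Stderr, "check failed:", err)
			os.Exit(1)
		}
		fmt.Printf("cgroup2 mounted at %s: %v\n", root, mounted)
	}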
	
	I0801 17:23:39.395255   28225 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
	I0801 17:23:39.395295   28225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23204 bytes)
	I0801 17:23:39.415649   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0801 17:23:40.129428   28225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:23:40.129514   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:40.129520   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93 minikube.k8s.io/name=cilium-20220801171038-13911 minikube.k8s.io/updated_at=2022_08_01T17_23_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:40.140703   28225 ops.go:34] apiserver oom_adj: -16
	I0801 17:23:40.224369   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:38.198106   28358 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0801 17:23:38.198241   28358 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220801171441-13911 dig +short host.docker.internal
	I0801 17:23:38.379043   28358 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:23:38.379141   28358 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:23:38.384273   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:38.462465   28358 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:23:38.462547   28358 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:23:38.498705   28358 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0801 17:23:38.498727   28358 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:23:38.498816   28358 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:23:38.574408   28358 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0801 17:23:38.574428   28358 cache_images.go:84] Images are preloaded, skipping loading
	I0801 17:23:38.574504   28358 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:23:38.705656   28358 cni.go:95] Creating CNI manager for ""
	I0801 17:23:38.705674   28358 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:23:38.705695   28358 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:23:38.705710   28358 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220801171441-13911 NodeName:kubernetes-upgrade-20220801171441-13911 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:23:38.705875   28358 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-20220801171441-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
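
	The kubeadm config above is generated by minikube from the profile's settings (kubeadm.go:162) and later copied to /var/tmp/minikube/kubeadm.yaml. As a hedged sketch of that render-from-settings step, the Go program below fills a ClusterConfiguration fragment with text/template; the template text and field names are illustrative, not minikube's actual template or types.

	package main

	import (
		"os"
		"text/template"
	)

	// clusterCfg is an illustrative fragment of the ClusterConfiguration
	// shown in the log above; minikube's real template has more fields.
	const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	controlPlaneEndpoint: {{.Endpoint}}
	kubernetesVersion: {{.Version}}
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		tmpl := template.Must(template.New("cfg").Parse(clusterCfg))
		// Values copied from the generated config above.
		_ = tmpl.Execute(os.Stdout, map[string]string{
			"ClusterName":   "mk",
			"Endpoint":      "control-plane.minikube.internal:8443",
			"Version":       "v1.24.3",
			"PodSubnet":     "10.244.0.0/16",
			"ServiceSubnet": "10.96.0.0/12",
		})
	}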
	
	I0801 17:23:38.705987   28358 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-20220801171441-13911 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220801171441-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:23:38.706050   28358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 17:23:38.721308   28358 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:23:38.721390   28358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:23:38.765287   28358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (501 bytes)
	I0801 17:23:38.782622   28358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:23:38.796818   28358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
	I0801 17:23:38.811187   28358 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:23:38.815595   28358 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911 for IP: 192.168.76.2
	I0801 17:23:38.815706   28358 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:23:38.815795   28358 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:23:38.815887   28358 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.key
	I0801 17:23:38.815946   28358 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.key.31bdca25
	I0801 17:23:38.815996   28358 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/proxy-client.key
	I0801 17:23:38.816206   28358 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:23:38.816244   28358 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:23:38.816259   28358 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:23:38.816294   28358 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:23:38.816327   28358 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:23:38.816360   28358 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:23:38.816428   28358 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:23:38.816979   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:23:38.835285   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0801 17:23:38.864408   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:23:38.887432   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0801 17:23:38.905691   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:23:38.925244   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:23:38.943015   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:23:38.967940   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:23:38.988511   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:23:39.013368   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:23:39.034430   28358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:23:39.056425   28358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:23:39.070804   28358 ssh_runner.go:195] Run: openssl version
	I0801 17:23:39.077696   28358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:23:39.087169   28358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:23:39.091730   28358 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:23:39.091778   28358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:23:39.097648   28358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 17:23:39.108107   28358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:23:39.123181   28358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:23:39.128266   28358 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:23:39.128320   28358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:23:39.134876   28358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:23:39.144445   28358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:23:39.157672   28358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:23:39.162429   28358 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:23:39.162483   28358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:23:39.169063   28358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
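
	The openssl/ln sequence above installs the test CAs into the system trust store: "openssl x509 -hash -noout" prints a certificate's subject hash, and OpenSSL looks up CAs in /etc/ssl/certs through symlinks named <hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). Below is a minimal Go sketch of that step; it assumes a system openssl binary and write access to /etc/ssl/certs, and is illustrative rather than minikube's code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		// Same command the log runs: print the certificate's subject hash.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "openssl failed:", err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		// Equivalent of "ln -fs": drop any stale link, then point it at the cert.
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, "symlink failed:", err)
			os.Exit(1)
		}
		fmt.Println("linked", link, "->", cert)
	}
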
	I0801 17:23:39.179547   28358 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220801171441-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220801171441-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:23:39.179687   28358 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:23:39.222245   28358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:23:39.232573   28358 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:23:39.232597   28358 kubeadm.go:626] restartCluster start
	I0801 17:23:39.232660   28358 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:23:39.248484   28358 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:23:39.248603   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:39.384325   28358 kubeconfig.go:92] found "kubernetes-upgrade-20220801171441-13911" server: "https://127.0.0.1:64044"
	I0801 17:23:39.384803   28358 kapi.go:59] client config for kubernetes-upgrade-20220801171441-13911: &rest.Config{Host:"https://127.0.0.1:64044", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22ff6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0801 17:23:39.385419   28358 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:23:39.395890   28358 api_server.go:165] Checking apiserver status ...
	I0801 17:23:39.395958   28358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:23:39.408490   28358 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/10822/cgroup
	W0801 17:23:39.424699   28358 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/10822/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:23:39.424716   28358 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64044/healthz ...
	I0801 17:23:41.695404   28358 api_server.go:266] https://127.0.0.1:64044/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:23:41.695438   28358 retry.go:31] will retry after 263.082536ms: https://127.0.0.1:64044/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:23:40.787414   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:41.287426   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:41.787026   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:42.287465   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:42.787057   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:43.289035   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:43.787404   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:44.288512   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:44.787672   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:45.287170   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:41.958677   28358 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64044/healthz ...
	I0801 17:23:41.965965   28358 api_server.go:266] https://127.0.0.1:64044/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:23:41.965985   28358 retry.go:31] will retry after 381.329545ms: https://127.0.0.1:64044/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
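
	The 403 and 500 responses above are expected while the apiserver's post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish; minikube simply re-polls /healthz with growing delays (retry.go:31) until it gets a 200. The Go sketch below reproduces that polling loop in miniature; the URL, attempt count, and timings are illustrative, not minikube's exact values.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver cert is issued for the cluster, not 127.0.0.1, so a
		// bare probe like this must skip verification (minikube authenticates
		// separately with client certs).
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://127.0.0.1:64044/healthz"
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("healthz ok after", attempt, "attempt(s)")
					return
				}
				fmt.Printf("attempt %d: healthz returned %d; retrying in %v\n", attempt, code, delay)
			} else {
				fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
			}
			time.Sleep(delay)
			delay += delay / 2 // grow the delay, like the increasing retry intervals above
		}
		fmt.Println("healthz never became ready")
	}
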
	I0801 17:23:42.347441   28358 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64044/healthz ...
	I0801 17:23:42.352873   28358 api_server.go:266] https://127.0.0.1:64044/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:23:42.352890   28358 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:64044/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:23:42.777621   28358 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64044/healthz ...
	I0801 17:23:42.783158   28358 api_server.go:266] https://127.0.0.1:64044/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:23:42.783181   28358 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:64044/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:23:43.257394   28358 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64044/healthz ...
	I0801 17:23:43.264624   28358 api_server.go:266] https://127.0.0.1:64044/healthz returned 200:
	ok
	I0801 17:23:43.278564   28358 system_pods.go:86] 5 kube-system pods found
	I0801 17:23:43.278587   28358 system_pods.go:89] "etcd-kubernetes-upgrade-20220801171441-13911" [0de2b525-61c8-41b0-999f-e618651a17b3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0801 17:23:43.278596   28358 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-20220801171441-13911" [98b6c102-1976-465e-a05d-1984c8655dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0801 17:23:43.278603   28358 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-20220801171441-13911" [a7e2d958-1cd7-40a3-8ff8-d8d9e52afa56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0801 17:23:43.278608   28358 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-20220801171441-13911" [d7e80a65-da96-4965-9e04-e9e2073cc96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0801 17:23:43.278614   28358 system_pods.go:89] "storage-provisioner" [69c5664a-211e-4494-bee3-d860352c1759] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0801 17:23:43.278620   28358 kubeadm.go:610] needs reconfigure: missing components: kube-dns, kube-proxy
	I0801 17:23:43.278627   28358 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:23:43.278695   28358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:23:43.319493   28358 docker.go:443] Stopping containers: [78f82b35ff9d 5c33e1d6d3c1 2f1242d99d50 a2610649d6c8 d5b5e2b467f4 ce4989e9ba57 677d9a1f7f9e e2a3da9fec54 5c027bfab708 5c376787dafe 53aa8d52c40d 625c16949e6d 2cb6c3d8218f b921897444eb bf5e29b00e5d 773eec25ca30 0a474053a782 ebb162f00a69]
	I0801 17:23:43.319575   28358 ssh_runner.go:195] Run: docker stop 78f82b35ff9d 5c33e1d6d3c1 2f1242d99d50 a2610649d6c8 d5b5e2b467f4 ce4989e9ba57 677d9a1f7f9e e2a3da9fec54 5c027bfab708 5c376787dafe 53aa8d52c40d 625c16949e6d 2cb6c3d8218f b921897444eb bf5e29b00e5d 773eec25ca30 0a474053a782 ebb162f00a69
	I0801 17:23:43.977297   28358 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:23:44.059925   28358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:23:44.071539   28358 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  2 00:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug  2 00:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2095 Aug  2 00:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug  2 00:23 /etc/kubernetes/scheduler.conf
	
	I0801 17:23:44.071605   28358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 17:23:44.080244   28358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 17:23:44.088924   28358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 17:23:44.097682   28358 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:23:44.097743   28358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0801 17:23:44.106686   28358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 17:23:44.115658   28358 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:23:44.115713   28358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
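
	The grep/rm sequence above is minikube's stale-kubeconfig check: any /etc/kubernetes/*.conf that no longer references https://control-plane.minikube.internal:8443 is removed so that the following "kubeadm init phase kubeconfig all" regenerates it. Below is a minimal Go sketch of that check, with the paths and endpoint mirroring this log; the logic is illustrative rather than minikube's actual code.

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		for _, conf := range []string{
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(conf)
			if err != nil {
				continue // missing file: nothing to clean up
			}
			if !bytes.Contains(data, endpoint) {
				fmt.Printf("%s does not reference %s; removing\n", conf, endpoint)
				_ = os.Remove(conf)
			}
		}
	}
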
	I0801 17:23:44.123496   28358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:23:44.166481   28358 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:23:44.166497   28358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:23:44.211599   28358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:23:44.774619   28358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:23:44.975239   28358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:23:45.024939   28358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:23:45.071440   28358 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:23:45.071514   28358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:23:45.583369   28358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:23:46.083133   28358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:23:46.092573   28358 api_server.go:71] duration metric: took 1.021128607s to wait for apiserver process to appear ...
	I0801 17:23:46.092591   28358 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:23:46.092599   28358 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64044/healthz ...
	I0801 17:23:45.787199   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:46.287956   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:46.787203   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:47.288060   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:47.787455   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:48.287118   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:48.787419   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:49.287561   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:49.789297   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:50.289252   28225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:23:49.436578   28358 api_server.go:266] https://127.0.0.1:64044/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0801 17:23:49.436602   28358 api_server.go:102] status: https://127.0.0.1:64044/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:23:49.936780   28358 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64044/healthz ...
	I0801 17:23:49.943732   28358 api_server.go:266] https://127.0.0.1:64044/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:23:49.943744   28358 api_server.go:102] status: https://127.0.0.1:64044/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:23:50.437150   28358 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64044/healthz ...
	I0801 17:23:50.442363   28358 api_server.go:266] https://127.0.0.1:64044/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:23:50.442392   28358 api_server.go:102] status: https://127.0.0.1:64044/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:23:50.937215   28358 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64044/healthz ...
	I0801 17:23:50.944499   28358 api_server.go:266] https://127.0.0.1:64044/healthz returned 200:
	ok
	I0801 17:23:50.951761   28358 api_server.go:140] control plane version: v1.24.3
	I0801 17:23:50.951776   28358 api_server.go:130] duration metric: took 4.859126337s to wait for apiserver health ...
	I0801 17:23:50.951782   28358 cni.go:95] Creating CNI manager for ""
	I0801 17:23:50.951786   28358 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:23:50.951790   28358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:23:50.956733   28358 system_pods.go:59] 5 kube-system pods found
	I0801 17:23:50.956746   28358 system_pods.go:61] "etcd-kubernetes-upgrade-20220801171441-13911" [0de2b525-61c8-41b0-999f-e618651a17b3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0801 17:23:50.956756   28358 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220801171441-13911" [98b6c102-1976-465e-a05d-1984c8655dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0801 17:23:50.956764   28358 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220801171441-13911" [a7e2d958-1cd7-40a3-8ff8-d8d9e52afa56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0801 17:23:50.956769   28358 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220801171441-13911" [d7e80a65-da96-4965-9e04-e9e2073cc96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0801 17:23:50.956774   28358 system_pods.go:61] "storage-provisioner" [69c5664a-211e-4494-bee3-d860352c1759] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0801 17:23:50.956778   28358 system_pods.go:74] duration metric: took 4.983084ms to wait for pod list to return data ...
	I0801 17:23:50.956792   28358 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:23:50.959534   28358 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:23:50.959547   28358 node_conditions.go:123] node cpu capacity is 6
	I0801 17:23:50.959555   28358 node_conditions.go:105] duration metric: took 2.75894ms to run NodePressure ...
	I0801 17:23:50.959566   28358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:23:51.068979   28358 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:23:51.076331   28358 ops.go:34] apiserver oom_adj: -16
	I0801 17:23:51.076343   28358 kubeadm.go:630] restartCluster took 11.84360681s
	I0801 17:23:51.076350   28358 kubeadm.go:397] StartCluster complete in 11.896685421s
	I0801 17:23:51.076361   28358 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:23:51.076431   28358 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:23:51.076863   28358 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
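
	The "WriteFile acquiring ... kubeconfig" line above shows minikube serializing kubeconfig updates behind a named lock with a 500ms retry delay and a 1m timeout. The Go sketch below shows the general lockfile-then-write pattern using an O_EXCL lock file; the paths and the locking mechanism are illustrative assumptions, not minikube's actual lock implementation.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire creates path with O_EXCL and retries until timeout; the file's
	// existence is the lock. It returns a release func that removes the file.
	func acquire(path string, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out waiting for lock %s", path)
			}
			time.Sleep(500 * time.Millisecond) // matches the 500ms Delay in the log
		}
	}

	func main() {
		release, err := acquire("/tmp/kubeconfig.lock", time.Minute)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer release()
		// Write the protected file only while holding the lock.
		_ = os.WriteFile("/tmp/kubeconfig", []byte("apiVersion: v1\nkind: Config\n"), 0o600)
		fmt.Println("kubeconfig written under lock")
	}
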
	I0801 17:23:51.077386   28358 kapi.go:59] client config for kubernetes-upgrade-20220801171441-13911: &rest.Config{Host:"https://127.0.0.1:64044", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22ff6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0801 17:23:51.079944   28358 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20220801171441-13911" rescaled to 1
	I0801 17:23:51.079975   28358 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:23:51.079986   28358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:23:51.079999   28358 addons.go:412] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0801 17:23:51.101689   28358 out.go:177] * Verifying Kubernetes components...
	I0801 17:23:51.080147   28358 config.go:180] Loaded profile config "kubernetes-upgrade-20220801171441-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:23:51.101747   28358 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-20220801171441-13911"
	I0801 17:23:51.101748   28358 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-20220801171441-13911"
	I0801 17:23:51.132952   28358 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0801 17:23:51.143123   28358 addons.go:153] Setting addon storage-provisioner=true in "kubernetes-upgrade-20220801171441-13911"
	I0801 17:23:51.143123   28358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20220801171441-13911"
	W0801 17:23:51.143135   28358 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:23:51.143145   28358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:23:51.143176   28358 host.go:66] Checking if "kubernetes-upgrade-20220801171441-13911" exists ...
	I0801 17:23:51.143377   28358 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220801171441-13911 --format={{.State.Status}}
	I0801 17:23:51.143473   28358 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220801171441-13911 --format={{.State.Status}}
	I0801 17:23:51.154227   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:51.223812   28358 kapi.go:59] client config for kubernetes-upgrade-20220801171441-13911: &rest.Config{Host:"https://127.0.0.1:64044", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801171441-13911/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22ff6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0801 17:23:51.230297   28358 addons.go:153] Setting addon default-storageclass=true in "kubernetes-upgrade-20220801171441-13911"
	W0801 17:23:51.251013   28358 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:23:51.250992   28358 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:23:51.251031   28358 host.go:66] Checking if "kubernetes-upgrade-20220801171441-13911" exists ...
	I0801 17:23:51.272092   28358 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:23:51.272116   28358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 17:23:51.272210   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:51.273478   28358 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220801171441-13911 --format={{.State.Status}}
	I0801 17:23:51.275426   28358 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:23:51.275899   28358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:23:51.288945   28358 api_server.go:71] duration metric: took 208.949365ms to wait for apiserver process to appear ...
	I0801 17:23:51.288989   28358 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:23:51.289006   28358 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64044/healthz ...
	I0801 17:23:51.295375   28358 api_server.go:266] https://127.0.0.1:64044/healthz returned 200:
	ok
	I0801 17:23:51.296937   28358 api_server.go:140] control plane version: v1.24.3
	I0801 17:23:51.296947   28358 api_server.go:130] duration metric: took 7.951533ms to wait for apiserver health ...
	I0801 17:23:51.296953   28358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:23:51.301334   28358 system_pods.go:59] 5 kube-system pods found
	I0801 17:23:51.301356   28358 system_pods.go:61] "etcd-kubernetes-upgrade-20220801171441-13911" [0de2b525-61c8-41b0-999f-e618651a17b3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0801 17:23:51.301370   28358 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220801171441-13911" [98b6c102-1976-465e-a05d-1984c8655dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0801 17:23:51.301383   28358 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220801171441-13911" [a7e2d958-1cd7-40a3-8ff8-d8d9e52afa56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0801 17:23:51.301393   28358 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220801171441-13911" [d7e80a65-da96-4965-9e04-e9e2073cc96c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0801 17:23:51.301402   28358 system_pods.go:61] "storage-provisioner" [69c5664a-211e-4494-bee3-d860352c1759] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0801 17:23:51.301407   28358 system_pods.go:74] duration metric: took 4.449625ms to wait for pod list to return data ...
	I0801 17:23:51.301424   28358 kubeadm.go:572] duration metric: took 221.423002ms to wait for : map[apiserver:true system_pods:true] ...
	I0801 17:23:51.301442   28358 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:23:51.304695   28358 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:23:51.304707   28358 node_conditions.go:123] node cpu capacity is 6
	I0801 17:23:51.304717   28358 node_conditions.go:105] duration metric: took 3.27055ms to run NodePressure ...
	I0801 17:23:51.304728   28358 start.go:216] waiting for startup goroutines ...
	I0801 17:23:51.352870   28358 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:23:51.352883   28358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:23:51.352946   28358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801171441-13911
	I0801 17:23:51.355828   28358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64040 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:23:51.425097   28358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64040 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801171441-13911/id_rsa Username:docker}
	I0801 17:23:51.447808   28358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:23:51.527552   28358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:23:52.037362   28358 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0801 17:23:52.079334   28358 addons.go:414] enableAddons completed in 999.290678ms
	I0801 17:23:52.109852   28358 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:23:52.147230   28358 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20220801171441-13911" cluster and "default" namespace by default
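
	At this point the in-place upgrade itself succeeded: restartCluster took 11.84s, both requested addons were re-applied from /etc/kubernetes/addons/, and kubectl 1.24.1 against cluster 1.24.3 shows zero minor skew. As a rough sketch only (the harness drives this through minikube's start path, not these commands), the addon phase corresponds to something like:

	  out/minikube-darwin-amd64 addons enable storage-provisioner -p kubernetes-upgrade-20220801171441-13911
	  out/minikube-darwin-amd64 addons enable default-storageclass -p kubernetes-upgrade-20220801171441-13911

	The sectioned dump below (Docker, container status, describe nodes, dmesg, per-component logs) is the post-mortem output gathered for this profile.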
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:18:59 UTC, end at Tue 2022-08-02 00:23:54 UTC. --
	Aug 02 00:23:36 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:36.938091402Z" level=info msg="Loading containers: start."
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.014988946Z" level=info msg="ignoring event" container=625c16949e6db32689bde12b2c673d15731fbeaaae70fa1bf1f62811e3e5d0f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.016438355Z" level=info msg="ignoring event" container=53aa8d52c40d27defb7042923e2ed2764f3257ef3978cd43cbd34a6858c77349 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.029860945Z" level=info msg="ignoring event" container=5c376787dafea04dafb01118433355b1e62cdd7bff9a6d418a32c98724be1eca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.154001136Z" level=info msg="ignoring event" container=5c027bfab7080a3799367238a03fe4dd6c5557b80dca862eb9408a03c1512c01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.339060136Z" level=info msg="Removing stale sandbox 529b8645f4c1b33812b084672cccc96e4ed90cb2dadd857ac956d0b6e1f4a7ba (625c16949e6db32689bde12b2c673d15731fbeaaae70fa1bf1f62811e3e5d0f3)"
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.363259518Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint ad198671a33a0c8886aa335fd08d238cc4a1f047c1c322ddf9e8643f2691ce4b 1ead0ee87c5190f5d95cdb74ad39709dc47f60f6201eaa5ab065c9523197f349], retrying...."
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.465426300Z" level=info msg="Removing stale sandbox 6238a6f94daaac7ad17517354ab23849c47270ac609652cb25c61979516103e8 (53aa8d52c40d27defb7042923e2ed2764f3257ef3978cd43cbd34a6858c77349)"
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.466287279Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint ad198671a33a0c8886aa335fd08d238cc4a1f047c1c322ddf9e8643f2691ce4b 90630548ad1d10cbdcf5aa4caa2909c5dbb4afdb7fde6f31cc4529432b444c85], retrying...."
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.565915200Z" level=info msg="Removing stale sandbox 8392c539d4072afc8b5f99746013eea90d084f32e33b63350da4c780d6606ba7 (e2a3da9fec544c5f34afac1882253fe283c559468d22f5cd461d7427b37bc574)"
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.567338169Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint ad198671a33a0c8886aa335fd08d238cc4a1f047c1c322ddf9e8643f2691ce4b 63352573e958d4ae273225b7e95d1cc5334dd91ae4aabf0d45b9206848b0daed], retrying...."
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.593828644Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.631967026Z" level=info msg="Loading containers: done."
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.640450448Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.640537843Z" level=info msg="Daemon has completed initialization"
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 systemd[1]: Started Docker Application Container Engine.
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.663364558Z" level=info msg="API listen on [::]:2376"
	Aug 02 00:23:37 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:37.669214519Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 02 00:23:43 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:43.392647241Z" level=info msg="ignoring event" container=a2610649d6c83522e222c6da87a441107fc65f9df6edfae0145afbc98af787a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:23:43 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:43.397950153Z" level=info msg="ignoring event" container=ce4989e9ba571c4c1a39a2f753142750f972f39d6e28f5b8c746547d15adef45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:23:43 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:43.401491304Z" level=info msg="ignoring event" container=677d9a1f7f9e48d1562823d8ee899ede9fa75f48a10d15ec284642a8c6465331 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:23:43 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:43.402026297Z" level=info msg="ignoring event" container=d5b5e2b467f47476b64073db6dccc2dcc8631145074d210693ab8b860a64a529 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:23:43 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:43.468459624Z" level=info msg="ignoring event" container=2f1242d99d50cd822a91c10ed41303a10031c049fbc5b805d8b4ce6fb155386f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:23:43 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:43.913243771Z" level=info msg="ignoring event" container=5c33e1d6d3c172722621fa0446df7687268a34bfe52597b514d5d0ebc3e40210 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:23:43 kubernetes-upgrade-20220801171441-13911 dockerd[10172]: time="2022-08-02T00:23:43.913887518Z" level=info msg="ignoring event" container=78f82b35ff9d4cdc35b5a16adb1d5d21beaa69c56681f2e2a7634f394f586831 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	2db11f4be6ae3       586c112956dfc       9 seconds ago       Running             kube-controller-manager   2                   21a7fcc5a229e
	5189c93f61e73       d521dd763e2e3       9 seconds ago       Running             kube-apiserver            2                   79a720251ea9f
	714632dd3f668       aebe758cef4cd       9 seconds ago       Running             etcd                      2                   94fdae19f1eca
	6cd2f7c3674d9       3a5aa3a515f5d       9 seconds ago       Running             kube-scheduler            3                   55e3fac1ddf14
	78f82b35ff9d4       d521dd763e2e3       16 seconds ago      Exited              kube-apiserver            1                   d5b5e2b467f47
	5c33e1d6d3c17       3a5aa3a515f5d       16 seconds ago      Exited              kube-scheduler            2                   a2610649d6c83
	2f1242d99d50c       aebe758cef4cd       16 seconds ago      Exited              etcd                      1                   ce4989e9ba571
	5c376787dafea       586c112956dfc       19 seconds ago      Exited              kube-controller-manager   1                   625c16949e6db
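
	Reading the table: every control-plane container was restarted during the upgrade, so the Running entries are on ATTEMPT 2 or 3 and their ATTEMPT 1 predecessors show as Exited. As a hypothetical manual follow-up (not part of the harness output; the ID is the 12-character prefix from the table), the logs of an exited attempt can be pulled from inside the node:

	  # inspect why kube-apiserver attempt 1 exited (hypothetical command)
	  out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-20220801171441-13911 -- docker logs 78f82b35ff9d

	The "kube-apiserver [78f82b35ff9d]" section further down contains exactly those logs.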
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-20220801171441-13911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-20220801171441-13911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93
	                    minikube.k8s.io/name=kubernetes-upgrade-20220801171441-13911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_08_01T17_23_29_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Aug 2022 00:23:26 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-20220801171441-13911
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Aug 2022 00:23:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Aug 2022 00:23:49 +0000   Tue, 02 Aug 2022 00:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Aug 2022 00:23:49 +0000   Tue, 02 Aug 2022 00:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Aug 2022 00:23:49 +0000   Tue, 02 Aug 2022 00:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Aug 2022 00:23:49 +0000   Tue, 02 Aug 2022 00:23:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-20220801171441-13911
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c192b04687c403f8fbb9bc7975b21b3
	  System UUID:                8c146f26-dfd0-4020-926d-55554baebc3f
	  Boot ID:                    71cf565c-fd32-45eb-95e1-c87a7a5ba5a0
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-20220801171441-13911                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         26s
	  kube-system                 kube-apiserver-kubernetes-upgrade-20220801171441-13911             250m (4%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-20220801171441-13911    200m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-kubernetes-upgrade-20220801171441-13911             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 25s              kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  25s              kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25s              kubelet  Node kubernetes-upgrade-20220801171441-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s              kubelet  Node kubernetes-upgrade-20220801171441-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s              kubelet  Node kubernetes-upgrade-20220801171441-13911 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             25s              kubelet  Node kubernetes-upgrade-20220801171441-13911 status is now: NodeNotReady
	  Normal  Starting                 9s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x5 over 9s)  kubelet  Node kubernetes-upgrade-20220801171441-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 9s)  kubelet  Node kubernetes-upgrade-20220801171441-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x4 over 9s)  kubelet  Node kubernetes-upgrade-20220801171441-13911 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s               kubelet  Updated Node Allocatable limit across pods
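
	Note the Taints line above: node.kubernetes.io/not-ready:NoSchedule is still set even though the Ready condition flipped to True at 00:23:41, which is why the storage-provisioner pod earlier in the trace is Pending with "1 node(s) had untolerated taint". A hedged manual check, assuming the kubeconfig context created by this profile:

	  # confirm the taint and list non-running kube-system pods (hypothetical commands)
	  kubectl --context kubernetes-upgrade-20220801171441-13911 get node kubernetes-upgrade-20220801171441-13911 -o jsonpath='{.spec.taints}'
	  kubectl --context kubernetes-upgrade-20220801171441-13911 get pods -n kube-system --field-selector=status.phase!=Running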
	
	* 
	* ==> dmesg <==
	* [  +0.001434] FS-Cache: O-key=[8] 'a336070400000000'
	[  +0.001106] FS-Cache: N-cookie c=000000006143e049 [p=000000003e51d12b fl=2 nc=0 na=1]
	[  +0.001736] FS-Cache: N-cookie d=00000000c02ab632 n=000000007370561b
	[  +0.001442] FS-Cache: N-key=[8] 'a336070400000000'
	[  +0.001754] FS-Cache: Duplicate cookie detected
	[  +0.001081] FS-Cache: O-cookie c=000000001fabbad4 [p=000000003e51d12b fl=226 nc=0 na=1]
	[  +0.001802] FS-Cache: O-cookie d=00000000c02ab632 n=000000000ab19c89
	[  +0.001454] FS-Cache: O-key=[8] 'a336070400000000'
	[  +0.001113] FS-Cache: N-cookie c=000000006143e049 [p=000000003e51d12b fl=2 nc=0 na=1]
	[  +0.001743] FS-Cache: N-cookie d=00000000c02ab632 n=00000000c182bdf9
	[  +0.001441] FS-Cache: N-key=[8] 'a336070400000000'
	[  +3.097702] FS-Cache: Duplicate cookie detected
	[  +0.001034] FS-Cache: O-cookie c=000000006d9e8dba [p=000000003e51d12b fl=226 nc=0 na=1]
	[  +0.001834] FS-Cache: O-cookie d=00000000c02ab632 n=00000000c6a910ed
	[  +0.001479] FS-Cache: O-key=[8] 'a236070400000000'
	[  +0.001160] FS-Cache: N-cookie c=00000000f4913f44 [p=000000003e51d12b fl=2 nc=0 na=1]
	[  +0.001828] FS-Cache: N-cookie d=00000000c02ab632 n=00000000c182bdf9
	[  +0.001431] FS-Cache: N-key=[8] 'a236070400000000'
	[  +0.448495] FS-Cache: Duplicate cookie detected
	[  +0.001028] FS-Cache: O-cookie c=00000000ea6aea4f [p=000000003e51d12b fl=226 nc=0 na=1]
	[  +0.001781] FS-Cache: O-cookie d=00000000c02ab632 n=00000000f2ac4d13
	[  +0.001449] FS-Cache: O-key=[8] 'aa36070400000000'
	[  +0.001099] FS-Cache: N-cookie c=0000000088a2da56 [p=000000003e51d12b fl=2 nc=0 na=1]
	[  +0.001742] FS-Cache: N-cookie d=00000000c02ab632 n=00000000f06bebfc
	[  +0.001427] FS-Cache: N-key=[8] 'aa36070400000000'
	
	* 
	* ==> etcd [2f1242d99d50] <==
	* {"level":"info","ts":"2022-08-02T00:23:38.344Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-08-02T00:23:38.344Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-08-02T00:23:38.344Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-20220801171441-13911 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:23:39.804Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-08-02T00:23:39.805Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-08-02T00:23:39.805Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:23:43.375Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-08-02T00:23:43.375Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"kubernetes-upgrade-20220801171441-13911","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/08/02 00:23:43 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/08/02 00:23:43 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-08-02T00:23:43.386Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-08-02T00:23:43.388Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-08-02T00:23:43.390Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-08-02T00:23:43.390Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"kubernetes-upgrade-20220801171441-13911","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [714632dd3f66] <==
	* {"level":"info","ts":"2022-08-02T00:23:46.008Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-08-02T00:23:46.060Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-08-02T00:23:46.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-08-02T00:23:46.061Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-08-02T00:23:46.061Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:23:46.061Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:23:46.064Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-08-02T00:23:46.064Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-08-02T00:23:46.064Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-08-02T00:23:46.064Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-08-02T00:23:46.064Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-08-02T00:23:47.704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2022-08-02T00:23:47.705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-08-02T00:23:47.705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-08-02T00:23:47.705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2022-08-02T00:23:47.705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-08-02T00:23:47.705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2022-08-02T00:23:47.705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-08-02T00:23:47.707Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-20220801171441-13911 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:23:47.707Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:23:47.707Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:23:47.708Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:23:47.708Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-08-02T00:23:47.708Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:23:47.709Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> kernel <==
	*  00:23:55 up 49 min,  0 users,  load average: 2.31, 1.33, 1.06
	Linux kubernetes-upgrade-20220801171441-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [5189c93f61e7] <==
	* I0802 00:23:49.431341       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0802 00:23:49.431405       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0802 00:23:49.438484       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0802 00:23:49.427158       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0802 00:23:49.442529       1 autoregister_controller.go:141] Starting autoregister controller
	I0802 00:23:49.442537       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0802 00:23:49.442691       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0802 00:23:49.442906       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0802 00:23:49.443020       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0802 00:23:49.486401       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0802 00:23:49.528236       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 00:23:49.528325       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 00:23:49.528364       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0802 00:23:49.532164       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0802 00:23:49.543077       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0802 00:23:49.543142       1 cache.go:39] Caches are synced for autoregister controller
	I0802 00:23:49.563216       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 00:23:49.582233       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0802 00:23:50.206436       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0802 00:23:50.430955       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 00:23:51.031399       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 00:23:51.036355       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0802 00:23:51.055998       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0802 00:23:51.065144       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 00:23:51.071013       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [78f82b35ff9d] <==
	* W0802 00:23:43.378771       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.378786       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.378801       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.378842       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.378860       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.378902       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.378920       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.378934       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.378949       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.378990       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.379159       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.379179       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.379226       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.379244       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.379263       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.379885       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.379945       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.380019       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.380045       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.380064       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.380086       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.380103       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.380120       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.380137       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:23:43.380454       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [2db11f4be6ae] <==
	* I0802 00:23:52.429473       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
	I0802 00:23:52.429636       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for daemonsets.apps
	I0802 00:23:52.429690       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0802 00:23:52.429785       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I0802 00:23:52.430052       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0802 00:23:52.430308       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for statefulsets.apps
	I0802 00:23:52.430354       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I0802 00:23:52.430413       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for cronjobs.batch
	I0802 00:23:52.430454       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
	I0802 00:23:52.430907       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for limitranges
	I0802 00:23:52.431101       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for events.events.k8s.io
	I0802 00:23:52.431136       1 controllermanager.go:593] Started "resourcequota"
	I0802 00:23:52.431256       1 resource_quota_controller.go:273] Starting resource quota controller
	I0802 00:23:52.431302       1 shared_informer.go:255] Waiting for caches to sync for resource quota
	I0802 00:23:52.431338       1 resource_quota_monitor.go:308] QuotaMonitor running
	I0802 00:23:52.572170       1 controllermanager.go:593] Started "serviceaccount"
	I0802 00:23:52.572221       1 serviceaccounts_controller.go:117] Starting service account controller
	I0802 00:23:52.572231       1 shared_informer.go:255] Waiting for caches to sync for service account
	I0802 00:23:52.720948       1 controllermanager.go:593] Started "replicaset"
	I0802 00:23:52.721025       1 replica_set.go:205] Starting replicaset controller
	I0802 00:23:52.721153       1 shared_informer.go:255] Waiting for caches to sync for ReplicaSet
	I0802 00:23:52.870554       1 controllermanager.go:593] Started "ttl"
	I0802 00:23:52.870596       1 ttl_controller.go:121] Starting TTL controller
	I0802 00:23:52.870605       1 shared_informer.go:255] Waiting for caches to sync for TTL
	I0802 00:23:52.921774       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [5c376787dafe] <==
	* I0802 00:23:36.269716       1 serving.go:348] Generated self-signed cert in-memory
	I0802 00:23:36.604854       1 controllermanager.go:180] Version: v1.24.3
	I0802 00:23:36.604897       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:23:36.605732       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0802 00:23:36.605904       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 00:23:36.605919       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0802 00:23:36.605986       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-scheduler [5c33e1d6d3c1] <==
	* I0802 00:23:39.298675       1 serving.go:348] Generated self-signed cert in-memory
	W0802 00:23:41.710310       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 00:23:41.710346       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 00:23:41.710353       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 00:23:41.710358       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 00:23:41.770441       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0802 00:23:41.770477       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:23:41.772753       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 00:23:41.772879       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 00:23:41.772931       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0802 00:23:41.773404       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 00:23:41.873819       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 00:23:43.365569       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0802 00:23:43.366045       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 00:23:43.366327       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kube-scheduler [6cd2f7c3674d] <==
	* I0802 00:23:46.321433       1 serving.go:348] Generated self-signed cert in-memory
	W0802 00:23:49.469053       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 00:23:49.469090       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 00:23:49.469146       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 00:23:49.469153       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 00:23:49.477774       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0802 00:23:49.477829       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:23:49.479163       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 00:23:49.479196       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 00:23:49.479531       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0802 00:23:49.479595       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 00:23:49.580184       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:18:59 UTC, end at Tue 2022-08-02 00:23:56 UTC. --
	Aug 02 00:23:47 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:47.390251   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:47 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:47.491379   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:47 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:47.592252   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:47 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:47.692938   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:47 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:47.793706   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:47 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:47.894706   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:47 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:47.995413   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:48 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:48.096210   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:48 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:48.197389   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:48 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:48.297655   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:48 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:48.398952   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:48 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:48.499162   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:48 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:48.599656   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:48 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:48.700645   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:48 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:48.801018   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:48 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:48.902181   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:49 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:49.003217   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:49 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:49.104018   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:49 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:49.205014   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:49 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:49.306162   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:49 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: E0802 00:23:49.406428   11583 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220801171441-13911\" not found"
	Aug 02 00:23:49 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: I0802 00:23:49.551572   11583 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-20220801171441-13911"
	Aug 02 00:23:49 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: I0802 00:23:49.551670   11583 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-20220801171441-13911"
	Aug 02 00:23:50 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: I0802 00:23:50.064050   11583 apiserver.go:52] "Watching apiserver"
	Aug 02 00:23:50 kubernetes-upgrade-20220801171441-13911 kubelet[11583]: I0802 00:23:50.113632   11583 reconciler.go:157] "Reconciler: start to sync state"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220801171441-13911 -n kubernetes-upgrade-20220801171441-13911
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-20220801171441-13911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context kubernetes-upgrade-20220801171441-13911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.784417356s)
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-20220801171441-13911 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220801171441-13911 describe pod storage-provisioner: exit status 1 (52.946725ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-20220801171441-13911 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220801171441-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220801171441-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220801171441-13911: (3.354671155s)
--- FAIL: TestKubernetesUpgrade (560.73s)
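Note on the post-mortem above: the harness locates the failing component by listing every pod that is not in the Running phase. The same query can be rerun by hand against the profile from this run:

	kubectl --context kubernetes-upgrade-20220801171441-13911 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

Here it returned only storage-provisioner, which no longer existed by the time the follow-up describe ran, hence the NotFound error above.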

TestMissingContainerUpgrade (50.09s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.4271357095.exe start -p missing-upgrade-20220801171351-13911 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.4271357095.exe start -p missing-upgrade-20220801171351-13911 --memory=2200 --driver=docker : exit status 78 (35.88059177s)

-- stdout --
	* [missing-upgrade-20220801171351-13911] minikube v1.9.1 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220801171351-13911
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-20220801171351-13911" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (incremental progress updates elided)
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-02 00:14:08.805935808 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-20220801171351-13911" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-02 00:14:26.353825058 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.4271357095.exe start -p missing-upgrade-20220801171351-13911 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.4271357095.exe start -p missing-upgrade-20220801171351-13911 --memory=2200 --driver=docker : exit status 70 (4.214268989s)

-- stdout --
	* [missing-upgrade-20220801171351-13911] minikube v1.9.1 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220801171351-13911
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220801171351-13911" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.4271357095.exe start -p missing-upgrade-20220801171351-13911 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.4271357095.exe start -p missing-upgrade-20220801171351-13911 --memory=2200 --driver=docker : exit status 70 (4.277743397s)

-- stdout --
	* [missing-upgrade-20220801171351-13911] minikube v1.9.1 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220801171351-13911
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220801171351-13911" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-08-01 17:14:38.994947 -0700 PDT m=+2442.357857216
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220801171351-13911
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20220801171351-13911:

-- stdout --
	[
	    {
	        "Id": "42cdbaae06baae9bb14332c4d56238b144f18a25fd2aa779375d4e07bb09ef1a",
	        "Created": "2022-08-02T00:14:17.016286474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 142743,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:14:17.279452729Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/42cdbaae06baae9bb14332c4d56238b144f18a25fd2aa779375d4e07bb09ef1a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42cdbaae06baae9bb14332c4d56238b144f18a25fd2aa779375d4e07bb09ef1a/hostname",
	        "HostsPath": "/var/lib/docker/containers/42cdbaae06baae9bb14332c4d56238b144f18a25fd2aa779375d4e07bb09ef1a/hosts",
	        "LogPath": "/var/lib/docker/containers/42cdbaae06baae9bb14332c4d56238b144f18a25fd2aa779375d4e07bb09ef1a/42cdbaae06baae9bb14332c4d56238b144f18a25fd2aa779375d4e07bb09ef1a-json.log",
	        "Name": "/missing-upgrade-20220801171351-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20220801171351-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ae60fe73999284255b853e3ad991682112f8f61209a201bfd68173a80d7f2ff8-init/diff:/var/lib/docker/overlay2/0400cad1b1313cb67e65ebe434b7dd2b29211488ca1be949b79ab4d6a79eb083/diff:/var/lib/docker/overlay2/ed57a0f5f3e1a9318836ec67c8928bcd3c5cb6dc101c50ea25c3dbe9f66b420b/diff:/var/lib/docker/overlay2/d7e41b730acd6ed99af00219d5b49e285810e9ee8617372d4281ac10e21c25e4/diff:/var/lib/docker/overlay2/dc52ef1dcecdf09e164c1863b8dd957e76c92af279dca514512910a777c5ca02/diff:/var/lib/docker/overlay2/cc5049b75b18bac615647f9185e16a39d5a4284077872e5ee4d92dc5a201dad2/diff:/var/lib/docker/overlay2/2566f17d12919bb1dbec910b8ad2bd988a5969b0f7790994fa7ae09b6921dd1b/diff:/var/lib/docker/overlay2/eeb11926bcaf873915458588cc325813b67bc777447d971da22180f1e3faf30c/diff:/var/lib/docker/overlay2/9d42d7c19475b99aa2669f464549b9a142ae2a0ff9a246164abe50e634e98e42/diff:/var/lib/docker/overlay2/5f303196a99ad4a9cae12fb0d21eb8b720994e95533de360b3547dcd7196f01f/diff:/var/lib/docker/overlay2/0ae627
cf2b88ab743a72e1cdd36956b6ac9f3997fae85c34d5713dad9f01dc84/diff:/var/lib/docker/overlay2/e058fad03b36217915773f8ee0df03b8bce92d9a4ead373f8240d8d771572bca/diff:/var/lib/docker/overlay2/6943f35823dec04a8285e8caebd892e09fac68a639bbbacd138e37fd68f0129a/diff:/var/lib/docker/overlay2/d0cc6ebebf4926de68319cedd869e1fc445bf1d364b3b0e35c1e830fe0fe48b4/diff:/var/lib/docker/overlay2/4472e24cfebff93d1e85b6e4d68ff625173c0e3152679abc20700fc92a14b1d1/diff:/var/lib/docker/overlay2/0e6a6441f8d09a9b42dc66b0c1b96324b926db60b70f4887003265eb438ac79d/diff:/var/lib/docker/overlay2/96d290e13d0c5ed9e67442baa879e92e1cdc28880b1d383e731225f02d8f07cd/diff:/var/lib/docker/overlay2/289ef8b1cad82c3009a902132283b644e1498ffcfeadcb259a4a204a83cf3cfd/diff:/var/lib/docker/overlay2/a088d2ff3331391b344eb7c1c616e95b1b8f68c5eaae24166ed26e85752c0464/diff:/var/lib/docker/overlay2/7baccffb45621ad4622b3a2c014a57d4ce16dda8dc7b6f3f11d9821cb964e5aa/diff:/var/lib/docker/overlay2/6cf270cd2e69e14e024959ad818ca7a94272885dc5bbf442baa824ecce417692/diff:/var/lib/d
ocker/overlay2/b2c09f536dfd40bc8116f84562c044148380c7873818bdd91cd50876633f28cd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae60fe73999284255b853e3ad991682112f8f61209a201bfd68173a80d7f2ff8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae60fe73999284255b853e3ad991682112f8f61209a201bfd68173a80d7f2ff8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae60fe73999284255b853e3ad991682112f8f61209a201bfd68173a80d7f2ff8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20220801171351-13911",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20220801171351-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20220801171351-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20220801171351-13911",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20220801171351-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "273809c38de0ebe186cb98c923bea245b84c4534bc5ae02b4941fe01e9e3cbe5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63102"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/273809c38de0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "1a76b290db54442b875f3e89264d9ade71568cba39a394161506a9d06c52ce89",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "36a75cfd26e0b119898f9567916858c4590125414a58496657a7667bf2804204",
	                    "EndpointID": "1a76b290db54442b875f3e89264d9ade71568cba39a394161506a9d06c52ce89",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
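The inspect dump confirms the container itself is up even though dockerd inside it never started. The two fields that matter here (run state and container IP) can be pulled without the full JSON using docker's format template:

	docker inspect missing-upgrade-20220801171351-13911 --format '{{.State.Status}} {{.NetworkSettings.IPAddress}}'

which for the state above prints "running 172.17.0.2".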
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220801171351-13911 -n missing-upgrade-20220801171351-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220801171351-13911 -n missing-upgrade-20220801171351-13911: exit status 6 (428.328424ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0801 17:14:39.484901   25183 status.go:413] kubeconfig endpoint: extract IP: "missing-upgrade-20220801171351-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-20220801171351-13911" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-20220801171351-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220801171351-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220801171351-13911: (2.45540804s)
--- FAIL: TestMissingContainerUpgrade (50.09s)
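Both start attempts die on the same docker.service unit that the v1.9.1 provisioner writes into the kic container. As the comments embedded in the diff explain, systemd rejects a second ExecStart= for a Type=notify service unless an empty ExecStart= first clears the inherited one; a minimal well-formed override using that reset pattern would look like the sketch below (the drop-in path is illustrative, and the dockerd command line is the stock one shown in the diff):

	# /etc/systemd/system/docker.service.d/10-override.conf  (illustrative path)
	[Service]
	# The empty assignment clears the ExecStart inherited from the base unit; without
	# it systemd refuses to start the service ("Service has more than one
	# ExecStart= setting, which is only allowed for Type=oneshot services").
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

In this run the provisioner actually replaces the base unit wholesale (the mv in the failing command) rather than installing a drop-in, so the reset line is redundant here; note also that its generated ExecReload=/bin/kill -s HUP has lost the $MAINPID argument the stock unit passes. The log only shows that docker.service then refuses to come up, not which of the rewritten settings is at fault.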

TestStoppedBinaryUpgrade/Upgrade (46.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2094976292.exe start -p stopped-upgrade-20220801171600-13911 --memory=2200 --vm-driver=docker 
E0801 17:16:33.897949   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2094976292.exe start -p stopped-upgrade-20220801171600-13911 --memory=2200 --vm-driver=docker : exit status 70 (34.837411184s)

-- stdout --
	* [stopped-upgrade-20220801171600-13911] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1586207582
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-02 00:16:17.117848505 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-20220801171600-13911" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-02 00:16:33.656849503 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-20220801171600-13911", then "minikube start -p stopped-upgrade-20220801171600-13911 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (incremental progress updates elided)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-02 00:16:33.656849503 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2094976292.exe start -p stopped-upgrade-20220801171600-13911 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2094976292.exe start -p stopped-upgrade-20220801171600-13911 --memory=2200 --vm-driver=docker : exit status 70 (4.606218117s)

-- stdout --
	* [stopped-upgrade-20220801171600-13911] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig2664972632
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220801171600-13911" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2094976292.exe start -p stopped-upgrade-20220801171600-13911 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2094976292.exe start -p stopped-upgrade-20220801171600-13911 --memory=2200 --vm-driver=docker : exit status 70 (4.468280519s)

-- stdout --
	* [stopped-upgrade-20220801171600-13911] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig596509473
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220801171600-13911" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (46.53s)
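Same docker.service provisioning failure as TestMissingContainerUpgrade, this time hit by the v1.9.0 release binary. The recovery the binary itself suggests (delete the profile, then start again with verbose logging) would amount to the following, substituting the binary under test for a released minikube:

	out/minikube-darwin-amd64 delete -p stopped-upgrade-20220801171600-13911
	out/minikube-darwin-amd64 start -p stopped-upgrade-20220801171600-13911 --alsologtostderr -v=1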

TestPause/serial/VerifyStatus (61.58s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220801171654-13911 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220801171654-13911 --output=json --layout=cluster: exit status 2 (16.09750258s)

-- stdout --
	{"Name":"pause-20220801171654-13911","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220801171654-13911","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
pause_test.go:200: incorrect status code: 405
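The assertion checks the StatusCode fields in the JSON printed above. To eyeball the same fields by hand, the output can be piped through jq (jq is not part of the harness, just a convenience here):

	out/minikube-darwin-amd64 status -p pause-20220801171654-13911 --output=json --layout=cluster \
	  | jq '{cluster: .StatusCode, components: [.Nodes[].Components | map_values(.StatusCode)]}'

For the run above this gives a cluster code of 405 (Stopped) with apiserver and kubelet both at 405, which is the value the incorrect-status-code check rejects.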
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220801171654-13911
helpers_test.go:235: (dbg) docker inspect pause-20220801171654-13911:

-- stdout --
	[
	    {
	        "Id": "315980173db3fece54bdc3dc8a8f0e35bc4f4daccc7139d0821aa01821ac91d9",
	        "Created": "2022-08-02T00:17:00.760668519Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 153344,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:17:01.054652445Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/315980173db3fece54bdc3dc8a8f0e35bc4f4daccc7139d0821aa01821ac91d9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/315980173db3fece54bdc3dc8a8f0e35bc4f4daccc7139d0821aa01821ac91d9/hostname",
	        "HostsPath": "/var/lib/docker/containers/315980173db3fece54bdc3dc8a8f0e35bc4f4daccc7139d0821aa01821ac91d9/hosts",
	        "LogPath": "/var/lib/docker/containers/315980173db3fece54bdc3dc8a8f0e35bc4f4daccc7139d0821aa01821ac91d9/315980173db3fece54bdc3dc8a8f0e35bc4f4daccc7139d0821aa01821ac91d9-json.log",
	        "Name": "/pause-20220801171654-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20220801171654-13911:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220801171654-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f83a1c0a8e5b25663009e97c1701c19f4031593b58e344e0a32f767b69bbe0d9-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f83a1c0a8e5b25663009e97c1701c19f4031593b58e344e0a32f767b69bbe0d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f83a1c0a8e5b25663009e97c1701c19f4031593b58e344e0a32f767b69bbe0d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f83a1c0a8e5b25663009e97c1701c19f4031593b58e344e0a32f767b69bbe0d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20220801171654-13911",
	                "Source": "/var/lib/docker/volumes/pause-20220801171654-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220801171654-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220801171654-13911",
	                "name.minikube.sigs.k8s.io": "pause-20220801171654-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dcbbeb8fe653b6d05e58eaa56a37c6c2c2dfe66d518fd5eaecb85e09bb0e405d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63804"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63806"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dcbbeb8fe653",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220801171654-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "315980173db3",
	                        "pause-20220801171654-13911"
	                    ],
	                    "NetworkID": "86262c53e00b92bf3cc7c53a858bd999eeefceca484321084bf5cf43cb3a7d8f",
	                    "EndpointID": "074d350d8894d4201201eb775b9bca5e6e16740a82f63347b6022482c8c67b52",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
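Rather than diffing the full JSON document above, single fields can be pulled from `docker inspect` with a Go format template via `-f`, which is exactly what the harness does further down this log for `.State.Status` and the host port mapped to 22/tcp. A sketch; `inspectField` is a hypothetical helper, and the container name is the one from this run:

```go
// Extract single fields from `docker inspect` with a Go template instead of
// parsing the whole JSON document shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspectField(container, tmpl string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "pause-20220801171654-13911"

	// "running" per the State block above.
	state, err := inspectField(name, "{{.State.Status}}")
	if err != nil {
		panic(err)
	}

	// Host port mapped to the container's SSH port (63802 in this run).
	sshPort, err := inspectField(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(state, sshPort)
}
```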
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220801171654-13911 -n pause-20220801171654-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220801171654-13911 -n pause-20220801171654-13911: exit status 2 (16.101717282s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-20220801171654-13911 logs -n 25

=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220801171654-13911 logs -n 25: (13.083866291s)
helpers_test.go:252: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                      | force-systemd-env-20220801171104-13911  | jenkins | v1.26.0 | 01 Aug 22 17:11 PDT | 01 Aug 22 17:11 PDT |
	|         | force-systemd-env-20220801171104-13911  |                                         |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr -v=5    |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| delete  | -p                                      | offline-docker-20220801171037-13911     | jenkins | v1.26.0 | 01 Aug 22 17:11 PDT | 01 Aug 22 17:11 PDT |
	|         | offline-docker-20220801171037-13911     |                                         |         |         |                     |                     |
	| start   | -p                                      | force-systemd-flag-20220801171127-13911 | jenkins | v1.26.0 | 01 Aug 22 17:11 PDT | 01 Aug 22 17:11 PDT |
	|         | force-systemd-flag-20220801171127-13911 |                                         |         |         |                     |                     |
	|         | --memory=2048 --force-systemd           |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker  |                                         |         |         |                     |                     |
	| ssh     | force-systemd-env-20220801171104-13911  | force-systemd-env-20220801171104-13911  | jenkins | v1.26.0 | 01 Aug 22 17:11 PDT | 01 Aug 22 17:11 PDT |
	|         | ssh docker info --format                |                                         |         |         |                     |                     |
	|         | {{.CgroupDriver}}                       |                                         |         |         |                     |                     |
	| delete  | -p                                      | force-systemd-env-20220801171104-13911  | jenkins | v1.26.0 | 01 Aug 22 17:11 PDT | 01 Aug 22 17:11 PDT |
	|         | force-systemd-env-20220801171104-13911  |                                         |         |         |                     |                     |
	| start   | -p                                      | docker-flags-20220801171136-13911       | jenkins | v1.26.0 | 01 Aug 22 17:11 PDT | 01 Aug 22 17:12 PDT |
	|         | docker-flags-20220801171136-13911       |                                         |         |         |                     |                     |
	|         | --cache-images=false                    |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --install-addons=false                  |                                         |         |         |                     |                     |
	|         | --wait=false --docker-env=FOO=BAR       |                                         |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                    |                                         |         |         |                     |                     |
	|         | --docker-opt=debug                      |                                         |         |         |                     |                     |
	|         | --docker-opt=icc=true                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | force-systemd-flag-20220801171127-13911 | force-systemd-flag-20220801171127-13911 | jenkins | v1.26.0 | 01 Aug 22 17:11 PDT | 01 Aug 22 17:11 PDT |
	|         | ssh docker info --format                |                                         |         |         |                     |                     |
	|         | {{.CgroupDriver}}                       |                                         |         |         |                     |                     |
	| delete  | -p                                      | force-systemd-flag-20220801171127-13911 | jenkins | v1.26.0 | 01 Aug 22 17:11 PDT | 01 Aug 22 17:12 PDT |
	|         | force-systemd-flag-20220801171127-13911 |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220801171201-13911    | jenkins | v1.26.0 | 01 Aug 22 17:12 PDT | 01 Aug 22 17:12 PDT |
	|         | cert-expiration-20220801171201-13911    |                                         |         |         |                     |                     |
	|         | --memory=2048 --cert-expiration=3m      |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | docker-flags-20220801171136-13911       | docker-flags-20220801171136-13911       | jenkins | v1.26.0 | 01 Aug 22 17:12 PDT | 01 Aug 22 17:12 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                     |                     |
	|         | --property=Environment --no-pager       |                                         |         |         |                     |                     |
	| ssh     | docker-flags-20220801171136-13911       | docker-flags-20220801171136-13911       | jenkins | v1.26.0 | 01 Aug 22 17:12 PDT | 01 Aug 22 17:12 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                     |                     |
	|         | --property=ExecStart --no-pager         |                                         |         |         |                     |                     |
	| delete  | -p                                      | docker-flags-20220801171136-13911       | jenkins | v1.26.0 | 01 Aug 22 17:12 PDT | 01 Aug 22 17:12 PDT |
	|         | docker-flags-20220801171136-13911       |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-options-20220801171209-13911       | jenkins | v1.26.0 | 01 Aug 22 17:12 PDT | 01 Aug 22 17:12 PDT |
	|         | cert-options-20220801171209-13911       |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1               |                                         |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15           |                                         |         |         |                     |                     |
	|         | --apiserver-names=localhost             |                                         |         |         |                     |                     |
	|         | --apiserver-names=www.google.com        |                                         |         |         |                     |                     |
	|         | --apiserver-port=8555                   |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	|         | --apiserver-name=localhost              |                                         |         |         |                     |                     |
	| ssh     | cert-options-20220801171209-13911       | cert-options-20220801171209-13911       | jenkins | v1.26.0 | 01 Aug 22 17:12 PDT | 01 Aug 22 17:12 PDT |
	|         | ssh openssl x509 -text -noout -in       |                                         |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt   |                                         |         |         |                     |                     |
	| ssh     | -p                                      | cert-options-20220801171209-13911       | jenkins | v1.26.0 | 01 Aug 22 17:12 PDT | 01 Aug 22 17:12 PDT |
	|         | cert-options-20220801171209-13911       |                                         |         |         |                     |                     |
	|         | -- sudo cat                             |                                         |         |         |                     |                     |
	|         | /etc/kubernetes/admin.conf              |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-options-20220801171209-13911       | jenkins | v1.26.0 | 01 Aug 22 17:12 PDT | 01 Aug 22 17:12 PDT |
	|         | cert-options-20220801171209-13911       |                                         |         |         |                     |                     |
	| delete  | -p                                      | running-upgrade-20220801171242-13911    | jenkins | v1.26.0 | 01 Aug 22 17:13 PDT | 01 Aug 22 17:13 PDT |
	|         | running-upgrade-20220801171242-13911    |                                         |         |         |                     |                     |
	| delete  | -p                                      | missing-upgrade-20220801171351-13911    | jenkins | v1.26.0 | 01 Aug 22 17:14 PDT | 01 Aug 22 17:14 PDT |
	|         | missing-upgrade-20220801171351-13911    |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220801171441-13911 | jenkins | v1.26.0 | 01 Aug 22 17:14 PDT |                     |
	|         | kubernetes-upgrade-20220801171441-13911 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220801171201-13911    | jenkins | v1.26.0 | 01 Aug 22 17:15 PDT | 01 Aug 22 17:15 PDT |
	|         | cert-expiration-20220801171201-13911    |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --cert-expiration=8760h                 |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-expiration-20220801171201-13911    | jenkins | v1.26.0 | 01 Aug 22 17:15 PDT | 01 Aug 22 17:16 PDT |
	|         | cert-expiration-20220801171201-13911    |                                         |         |         |                     |                     |
	| delete  | -p                                      | stopped-upgrade-20220801171600-13911    | jenkins | v1.26.0 | 01 Aug 22 17:16 PDT | 01 Aug 22 17:16 PDT |
	|         | stopped-upgrade-20220801171600-13911    |                                         |         |         |                     |                     |
	| start   | -p pause-20220801171654-13911           | pause-20220801171654-13911              | jenkins | v1.26.0 | 01 Aug 22 17:16 PDT | 01 Aug 22 17:17 PDT |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --install-addons=false                  |                                         |         |         |                     |                     |
	|         | --wait=all --driver=docker              |                                         |         |         |                     |                     |
	| start   | -p pause-20220801171654-13911           | pause-20220801171654-13911              | jenkins | v1.26.0 | 01 Aug 22 17:17 PDT | 01 Aug 22 17:18 PDT |
	|         | --alsologtostderr -v=1                  |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| pause   | -p pause-20220801171654-13911           | pause-20220801171654-13911              | jenkins | v1.26.0 | 01 Aug 22 17:18 PDT | 01 Aug 22 17:18 PDT |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:17:38
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:17:38.790264   25967 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:17:38.790477   25967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:17:38.790483   25967 out.go:309] Setting ErrFile to fd 2...
	I0801 17:17:38.790486   25967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:17:38.790602   25967 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:17:38.791066   25967 out.go:303] Setting JSON to false
	I0801 17:17:38.806603   25967 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8229,"bootTime":1659391229,"procs":381,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:17:38.806739   25967 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:17:38.828631   25967 out.go:177] * [pause-20220801171654-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:17:38.870705   25967 notify.go:193] Checking for updates...
	I0801 17:17:38.892599   25967 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:17:38.913763   25967 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:17:38.955820   25967 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:17:38.976850   25967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:17:38.998485   25967 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:17:39.020186   25967 config.go:180] Loaded profile config "pause-20220801171654-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:17:39.020645   25967 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:17:39.090654   25967 docker.go:137] docker version: linux-20.10.17
	I0801 17:17:39.090819   25967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:17:39.230647   25967 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:56 SystemTime:2022-08-02 00:17:39.155470117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:17:39.253208   25967 out.go:177] * Using the docker driver based on existing profile
	I0801 17:17:39.295271   25967 start.go:284] selected driver: docker
	I0801 17:17:39.295300   25967 start.go:808] validating driver "docker" against &{Name:pause-20220801171654-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:pause-20220801171654-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:17:39.295423   25967 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:17:39.295592   25967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:17:39.430743   25967 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:56 SystemTime:2022-08-02 00:17:39.356080855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:17:39.432838   25967 cni.go:95] Creating CNI manager for ""
	I0801 17:17:39.432859   25967 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:17:39.432874   25967 start_flags.go:310] config:
	{Name:pause-20220801171654-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:pause-20220801171654-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:17:39.476351   25967 out.go:177] * Starting control plane node pause-20220801171654-13911 in cluster pause-20220801171654-13911
	I0801 17:17:39.497623   25967 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:17:39.518475   25967 out.go:177] * Pulling base image ...
	I0801 17:17:39.560612   25967 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:17:39.560627   25967 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:17:39.560700   25967 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0801 17:17:39.560731   25967 cache.go:57] Caching tarball of preloaded images
	I0801 17:17:39.561542   25967 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:17:39.561736   25967 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0801 17:17:39.562119   25967 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/config.json ...
	I0801 17:17:39.626014   25967 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:17:39.626029   25967 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:17:39.626039   25967 cache.go:208] Successfully downloaded all kic artifacts
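The preload lines above amount to a stat-before-download check on the cache path. A sketch of that pattern, with `downloadPreload` as a hypothetical stand-in for the real fetch; the filename follows the cached tarball shown above:

```go
// Use the preloaded tarball from the cache when it is already on disk,
// otherwise fall back to downloading it.
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

func ensurePreload(minikubeHome, k8sVersion string) (string, error) {
	tarball := filepath.Join(minikubeHome, "cache", "preloaded-tarball",
		fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion))

	if _, err := os.Stat(tarball); err == nil {
		// Found in cache, skip download (the path this log takes).
		return tarball, nil
	} else if !errors.Is(err, os.ErrNotExist) {
		return "", err
	}
	return tarball, downloadPreload(tarball)
}

func downloadPreload(dst string) error {
	return fmt.Errorf("download of %s not implemented in this sketch", dst)
}

func main() {
	path, err := ensurePreload(os.Getenv("MINIKUBE_HOME"), "v1.24.3")
	fmt.Println(path, err)
}
```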
	I0801 17:17:39.626122   25967 start.go:371] acquiring machines lock for pause-20220801171654-13911: {Name:mkab6f62625094c5d71c7e6508d1f7740cc4193b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:17:39.626199   25967 start.go:375] acquired machines lock for "pause-20220801171654-13911" in 57.75µs
	I0801 17:17:39.626221   25967 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:17:39.626229   25967 fix.go:55] fixHost starting: 
	I0801 17:17:39.626476   25967 cli_runner.go:164] Run: docker container inspect pause-20220801171654-13911 --format={{.State.Status}}
	I0801 17:17:39.697312   25967 fix.go:103] recreateIfNeeded on pause-20220801171654-13911: state=Running err=<nil>
	W0801 17:17:39.697353   25967 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:17:39.719317   25967 out.go:177] * Updating the running docker "pause-20220801171654-13911" container ...
	I0801 17:17:39.740657   25967 machine.go:88] provisioning docker machine ...
	I0801 17:17:39.740740   25967 ubuntu.go:169] provisioning hostname "pause-20220801171654-13911"
	I0801 17:17:39.740897   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:17:39.813318   25967 main.go:134] libmachine: Using SSH client type: native
	I0801 17:17:39.813517   25967 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63802 <nil> <nil>}
	I0801 17:17:39.813534   25967 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220801171654-13911 && echo "pause-20220801171654-13911" | sudo tee /etc/hostname
	I0801 17:17:39.932463   25967 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220801171654-13911
	
	I0801 17:17:39.932561   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:17:40.005256   25967 main.go:134] libmachine: Using SSH client type: native
	I0801 17:17:40.005426   25967 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63802 <nil> <nil>}
	I0801 17:17:40.005441   25967 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220801171654-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220801171654-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220801171654-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:17:40.120348   25967 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:17:40.120380   25967 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:17:40.120399   25967 ubuntu.go:177] setting up certificates
	I0801 17:17:40.120414   25967 provision.go:83] configureAuth start
	I0801 17:17:40.120485   25967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220801171654-13911
	I0801 17:17:40.191145   25967 provision.go:138] copyHostCerts
	I0801 17:17:40.191222   25967 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:17:40.191231   25967 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:17:40.191328   25967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:17:40.191533   25967 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:17:40.191543   25967 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:17:40.191607   25967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:17:40.191751   25967 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:17:40.191757   25967 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:17:40.191811   25967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:17:40.191930   25967 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.pause-20220801171654-13911 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220801171654-13911]
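The `generating server cert` step above issues a server certificate whose SANs cover the node IP, loopback, and the profile's hostnames (the `san=[...]` list in the log line). A stripped-down sketch with Go's `crypto/x509`; it self-signs to stay short, whereas the real step signs with the profile's CA:

```go
// Issue a server certificate carrying the SAN list from the provision step
// above. Sketch only, not minikube's code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-20220801171654-13911"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "pause-20220801171654-13911"},
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```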
	I0801 17:17:40.382012   25967 provision.go:172] copyRemoteCerts
	I0801 17:17:40.382072   25967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:17:40.382115   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:17:40.455196   25967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63802 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/pause-20220801171654-13911/id_rsa Username:docker}
	I0801 17:17:40.538109   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:17:40.554333   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0801 17:17:40.570674   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0801 17:17:40.587975   25967 provision.go:86] duration metric: configureAuth took 467.543415ms
	I0801 17:17:40.587988   25967 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:17:40.588132   25967 config.go:180] Loaded profile config "pause-20220801171654-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:17:40.588195   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:17:40.660249   25967 main.go:134] libmachine: Using SSH client type: native
	I0801 17:17:40.660431   25967 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63802 <nil> <nil>}
	I0801 17:17:40.660441   25967 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:17:40.772763   25967 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:17:40.772778   25967 ubuntu.go:71] root file system type: overlay
	I0801 17:17:40.772908   25967 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:17:40.772975   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:17:40.844324   25967 main.go:134] libmachine: Using SSH client type: native
	I0801 17:17:40.844472   25967 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63802 <nil> <nil>}
	I0801 17:17:40.844533   25967 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:17:40.966344   25967 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:17:40.966421   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:17:41.037180   25967 main.go:134] libmachine: Using SSH client type: native
	I0801 17:17:41.037336   25967 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63802 <nil> <nil>}
	I0801 17:17:41.037350   25967 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:17:41.154641   25967 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:17:41.154654   25967 machine.go:91] provisioned docker machine in 1.413935909s
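
Editor's note: the unit update above is deliberately idempotent. The rendered file is written to docker.service.new, and docker is moved into place and restarted only when `diff -u` reports a difference, so an unchanged config costs no daemon restart. A sketch of the idiom as a command builder (updateUnitCmd is a name invented here, not minikube's helper):

package main

import "fmt"

// updateUnitCmd builds the compare-and-swap shell pipeline seen in the log:
// restart docker only when the freshly rendered unit actually differs.
func updateUnitCmd(path string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		path)
}

func main() {
	fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
}
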
	I0801 17:17:41.154663   25967 start.go:307] post-start starting for "pause-20220801171654-13911" (driver="docker")
	I0801 17:17:41.154668   25967 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:17:41.154726   25967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:17:41.154772   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:17:41.225491   25967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63802 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/pause-20220801171654-13911/id_rsa Username:docker}
	I0801 17:17:41.310889   25967 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:17:41.314430   25967 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:17:41.314447   25967 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:17:41.314460   25967 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:17:41.314464   25967 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:17:41.314471   25967 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:17:41.314576   25967 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:17:41.314720   25967 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:17:41.314862   25967 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:17:41.322039   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:17:41.340672   25967 start.go:310] post-start completed in 185.998049ms
	I0801 17:17:41.340749   25967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:17:41.340808   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:17:41.413103   25967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63802 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/pause-20220801171654-13911/id_rsa Username:docker}
	I0801 17:17:41.495745   25967 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:17:41.500309   25967 fix.go:57] fixHost completed within 1.874056989s
	I0801 17:17:41.500324   25967 start.go:82] releasing machines lock for "pause-20220801171654-13911", held for 1.874095352s
	I0801 17:17:41.500391   25967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220801171654-13911
	I0801 17:17:41.571758   25967 ssh_runner.go:195] Run: systemctl --version
	I0801 17:17:41.571776   25967 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:17:41.571826   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:17:41.571848   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:17:41.648188   25967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63802 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/pause-20220801171654-13911/id_rsa Username:docker}
	I0801 17:17:41.649665   25967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63802 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/pause-20220801171654-13911/id_rsa Username:docker}
	I0801 17:17:41.922742   25967 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:17:41.932605   25967 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:17:41.932657   25967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:17:41.944469   25967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:17:41.958547   25967 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:17:42.058868   25967 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:17:42.156463   25967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:17:42.246939   25967 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:17:58.146720   25967 ssh_runner.go:235] Completed: sudo systemctl restart docker: (15.899571774s)
	I0801 17:17:58.146785   25967 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0801 17:17:58.296164   25967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:17:58.422995   25967 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0801 17:17:58.442438   25967 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0801 17:17:58.442520   25967 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0801 17:17:58.448796   25967 start.go:471] Will wait 60s for crictl version
	I0801 17:17:58.448859   25967 ssh_runner.go:195] Run: sudo crictl version
	I0801 17:17:58.513960   25967 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0801 17:17:58.514043   25967 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:17:58.606162   25967 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
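
Editor's note: the runtime probes above rely on Docker's Go-template output rather than parsing human-readable text. A sketch of the version query, assuming a local docker CLI on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe as the log: ask dockerd for its server version directly.
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. "20.10.17"
}
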
	I0801 17:17:58.756796   25967 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0801 17:17:58.756995   25967 cli_runner.go:164] Run: docker exec -t pause-20220801171654-13911 dig +short host.docker.internal
	I0801 17:17:58.897133   25967 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:17:58.897233   25967 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:17:58.903622   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:17:58.979618   25967 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:17:58.979702   25967 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:17:59.020043   25967 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0801 17:17:59.020064   25967 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:17:59.020130   25967 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:17:59.108751   25967 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0801 17:17:59.108777   25967 cache_images.go:84] Images are preloaded, skipping loading
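
Editor's note: the preload short-circuit above compares `docker images --format {{.Repository}}:{{.Tag}}` against the tags the preload tarball would provide, and skips extraction when every one is already present. A sketch of that containment check (imagesPresent is a name invented here; the real logic lives in docker.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPresent reports whether every expected tag already appears in the
// local image list — the check that lets the log skip extraction.
func imagesPresent(want []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[img] = true
	}
	for _, img := range want {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPresent([]string{"k8s.gcr.io/pause:3.7", "k8s.gcr.io/etcd:3.5.3-0"})
	fmt.Println(ok, err)
}
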
	I0801 17:17:59.108845   25967 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:17:59.295150   25967 cni.go:95] Creating CNI manager for ""
	I0801 17:17:59.295163   25967 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:17:59.295176   25967 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:17:59.295203   25967 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220801171654-13911 NodeName:pause-20220801171654-13911 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:17:59.295312   25967 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-20220801171654-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
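
Editor's note: the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged earlier and shipped to /var/tmp/minikube/kubeadm.yaml.new. A toy text/template rendering of just the ClusterConfiguration fragment; the struct and field names below are chosen for this sketch, while minikube uses a much larger template:

package main

import (
	"os"
	"text/template"
)

// Toy rendering of a ClusterConfiguration fragment from option values.
var clusterCfg = template.Must(template.New("cc").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`))

func main() {
	opts := struct {
		KubernetesVersion, ControlPlaneAddress, PodSubnet, ServiceCIDR string
		APIServerPort                                                  int
	}{"v1.24.3", "control-plane.minikube.internal", "10.244.0.0/16", "10.96.0.0/12", 8443}
	if err := clusterCfg.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
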
	
	I0801 17:17:59.295418   25967 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220801171654-13911 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:pause-20220801171654-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:17:59.295491   25967 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 17:17:59.308017   25967 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:17:59.308093   25967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:17:59.323842   25967 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
	I0801 17:17:59.399005   25967 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:17:59.477889   25967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0801 17:17:59.495413   25967 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:17:59.500693   25967 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911 for IP: 192.168.67.2
	I0801 17:17:59.500811   25967 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:17:59.500864   25967 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:17:59.500952   25967 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/client.key
	I0801 17:17:59.501009   25967 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/apiserver.key.c7fa3a9e
	I0801 17:17:59.501063   25967 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/proxy-client.key
	I0801 17:17:59.501325   25967 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:17:59.501368   25967 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:17:59.501380   25967 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:17:59.501416   25967 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:17:59.501447   25967 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:17:59.501477   25967 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:17:59.501543   25967 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:17:59.502068   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:17:59.586060   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 17:17:59.610310   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:17:59.689247   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0801 17:17:59.715641   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:17:59.790918   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:17:59.876232   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:17:59.901578   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:17:59.977360   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:18:00.005719   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:18:00.083521   25967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:18:00.108577   25967 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
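
Editor's note: the "scp memory --> path" lines above denote in-memory buffers streamed to the guest rather than files copied from disk. A rough stand-in using plain ssh plus sudo tee (copyBytes, the host string, and the payload are placeholders for this sketch; minikube uses its own SSH runner):

package main

import (
	"bytes"
	"io"
	"os/exec"
)

// copyBytes streams an in-memory buffer to a remote root-owned file,
// approximating the "scp memory --> ..." steps in the log.
func copyBytes(host, path string, data []byte) error {
	cmd := exec.Command("ssh", host, "sudo", "tee", path)
	cmd.Stdin = bytes.NewReader(data)
	cmd.Stdout = io.Discard // tee echoes the payload; drop it
	return cmd.Run()
}

func main() {
	_ = copyBytes("docker@127.0.0.1", "/var/lib/minikube/kubeconfig", []byte("..."))
}
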
	I0801 17:18:00.173470   25967 ssh_runner.go:195] Run: openssl version
	I0801 17:18:00.178874   25967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:18:00.186944   25967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:18:00.191300   25967 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:18:00.191355   25967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:18:00.199045   25967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 17:18:00.206739   25967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:18:00.215286   25967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:18:00.220468   25967 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:18:00.220522   25967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:18:00.226362   25967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:18:00.235137   25967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:18:00.275211   25967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:18:00.279641   25967 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:18:00.279686   25967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:18:00.290704   25967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
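
Editor's note: each CA installed under /usr/share/ca-certificates above is symlinked into /etc/ssl/certs under its OpenSSL subject hash (`openssl x509 -hash -noout -in ...`), which is how OpenSSL locates trust anchors at verification time. A sketch of that idiom (linkBySubjectHash is a name invented here; the log performs the same steps remotely via sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash computes a CA's OpenSSL subject hash and creates the
// <hash>.0 symlink that OpenSSL's cert directory lookup expects.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA in the log
	return os.Symlink(certPath, fmt.Sprintf("/etc/ssl/certs/%s.0", hash))
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
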
	I0801 17:18:00.298424   25967 kubeadm.go:395] StartCluster: {Name:pause-20220801171654-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:pause-20220801171654-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:18:00.298523   25967 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:18:00.331377   25967 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:18:00.386767   25967 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:18:00.386787   25967 kubeadm.go:626] restartCluster start
	I0801 17:18:00.386857   25967 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:18:00.394262   25967 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:18:00.394327   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:18:00.470875   25967 kubeconfig.go:92] found "pause-20220801171654-13911" server: "https://127.0.0.1:63806"
	I0801 17:18:00.471321   25967 kapi.go:59] client config for pause-20220801171654-13911: &rest.Config{Host:"https://127.0.0.1:63806", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22ff6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0801 17:18:00.471884   25967 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:18:00.479781   25967 api_server.go:165] Checking apiserver status ...
	I0801 17:18:00.479842   25967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:18:00.489031   25967 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4656/cgroup
	W0801 17:18:00.496512   25967 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4656/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:18:00.496525   25967 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63806/healthz ...
	I0801 17:18:02.801097   25967 api_server.go:266] https://127.0.0.1:63806/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:18:02.801124   25967 retry.go:31] will retry after 263.082536ms: https://127.0.0.1:63806/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:18:03.064316   25967 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63806/healthz ...
	I0801 17:18:03.070597   25967 api_server.go:266] https://127.0.0.1:63806/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:18:03.070617   25967 retry.go:31] will retry after 381.329545ms: https://127.0.0.1:63806/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:18:03.452065   25967 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63806/healthz ...
	I0801 17:18:03.458281   25967 api_server.go:266] https://127.0.0.1:63806/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:18:03.458299   25967 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:63806/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:18:03.881165   25967 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63806/healthz ...
	I0801 17:18:03.887595   25967 api_server.go:266] https://127.0.0.1:63806/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:18:03.887616   25967 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:63806/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:18:04.362305   25967 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63806/healthz ...
	I0801 17:18:04.369780   25967 api_server.go:266] https://127.0.0.1:63806/healthz returned 200:
	ok
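
Editor's note: the 403 → 500 → 200 progression above is the apiserver coming back up: anonymous /healthz is briefly forbidden, then the rbac and priority-class post-start hooks finish, then the check passes. minikube simply polls with growing delays; a sketch of such a loop (waitHealthz is invented for this sketch, and TLS verification is skipped only to keep it short — the real client pins the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz URL until it answers 200 OK or
// the deadline passes, mirroring the retry loop in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(wait)
		wait += wait / 2 // roughly the growing delays logged above
	}
	return fmt.Errorf("healthz did not return 200 within %v", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://127.0.0.1:63806/healthz", time.Minute))
}
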
	I0801 17:18:04.381436   25967 system_pods.go:86] 6 kube-system pods found
	I0801 17:18:04.381454   25967 system_pods.go:89] "coredns-6d4b75cb6d-2p796" [03919d6b-ad6e-4a7a-aad1-936670c554a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0801 17:18:04.381463   25967 system_pods.go:89] "etcd-pause-20220801171654-13911" [734057de-416b-4bfe-844b-86504ccc8d95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0801 17:18:04.381470   25967 system_pods.go:89] "kube-apiserver-pause-20220801171654-13911" [57bcb747-7eb4-448c-8803-b08d64586c96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0801 17:18:04.381477   25967 system_pods.go:89] "kube-controller-manager-pause-20220801171654-13911" [c6e08221-0002-4265-b33c-8abd1680f144] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0801 17:18:04.381482   25967 system_pods.go:89] "kube-proxy-7pzw6" [2cd2e4db-aabc-4373-a763-2e1075cd0b06] Running
	I0801 17:18:04.381490   25967 system_pods.go:89] "kube-scheduler-pause-20220801171654-13911" [7a7c39d0-9334-4264-a5a6-c69765358507] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0801 17:18:04.382625   25967 api_server.go:140] control plane version: v1.24.3
	I0801 17:18:04.382638   25967 kubeadm.go:620] The running cluster does not require reconfiguration: 127.0.0.1
	I0801 17:18:04.382649   25967 kubeadm.go:674] Taking a shortcut, as the cluster seems to be properly configured
	I0801 17:18:04.382654   25967 kubeadm.go:630] restartCluster took 3.995817244s
	I0801 17:18:04.382660   25967 kubeadm.go:397] StartCluster complete in 4.08419576s
	I0801 17:18:04.382670   25967 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:18:04.382738   25967 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:18:04.383171   25967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:18:04.383915   25967 kapi.go:59] client config for pause-20220801171654-13911: &rest.Config{Host:"https://127.0.0.1:63806", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22ff6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0801 17:18:04.386239   25967 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220801171654-13911" rescaled to 1
	I0801 17:18:04.386274   25967 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:18:04.386288   25967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:18:04.386336   25967 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0801 17:18:04.386446   25967 config.go:180] Loaded profile config "pause-20220801171654-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:18:04.459477   25967 out.go:177] * Verifying Kubernetes components...
	I0801 17:18:04.459572   25967 addons.go:65] Setting storage-provisioner=true in profile "pause-20220801171654-13911"
	I0801 17:18:04.459574   25967 addons.go:65] Setting default-storageclass=true in profile "pause-20220801171654-13911"
	I0801 17:18:04.480654   25967 addons.go:153] Setting addon storage-provisioner=true in "pause-20220801171654-13911"
	I0801 17:18:04.480660   25967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:18:04.466712   25967 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0801 17:18:04.480665   25967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220801171654-13911"
	W0801 17:18:04.480670   25967 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:18:04.480734   25967 host.go:66] Checking if "pause-20220801171654-13911" exists ...
	I0801 17:18:04.481003   25967 cli_runner.go:164] Run: docker container inspect pause-20220801171654-13911 --format={{.State.Status}}
	I0801 17:18:04.481156   25967 cli_runner.go:164] Run: docker container inspect pause-20220801171654-13911 --format={{.State.Status}}
	I0801 17:18:04.501422   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:18:04.561650   25967 kapi.go:59] client config for pause-20220801171654-13911: &rest.Config{Host:"https://127.0.0.1:63806", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/pause-20220801171654-13911/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22ff6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0801 17:18:04.565082   25967 addons.go:153] Setting addon default-storageclass=true in "pause-20220801171654-13911"
	W0801 17:18:04.585331   25967 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:18:04.585313   25967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:18:04.585358   25967 host.go:66] Checking if "pause-20220801171654-13911" exists ...
	I0801 17:18:04.605929   25967 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:18:04.605939   25967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 17:18:04.605995   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:18:04.606690   25967 cli_runner.go:164] Run: docker container inspect pause-20220801171654-13911 --format={{.State.Status}}
	I0801 17:18:04.614528   25967 node_ready.go:35] waiting up to 6m0s for node "pause-20220801171654-13911" to be "Ready" ...
	I0801 17:18:04.617689   25967 node_ready.go:49] node "pause-20220801171654-13911" has status "Ready":"True"
	I0801 17:18:04.617699   25967 node_ready.go:38] duration metric: took 3.139064ms waiting for node "pause-20220801171654-13911" to be "Ready" ...
	I0801 17:18:04.617707   25967 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:18:04.622376   25967 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-2p796" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:04.685955   25967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63802 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/pause-20220801171654-13911/id_rsa Username:docker}
	I0801 17:18:04.687694   25967 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:18:04.687704   25967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:18:04.687758   25967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220801171654-13911
	I0801 17:18:04.759109   25967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63802 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/pause-20220801171654-13911/id_rsa Username:docker}
	I0801 17:18:04.774785   25967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:18:04.850541   25967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:18:05.133459   25967 pod_ready.go:92] pod "coredns-6d4b75cb6d-2p796" in "kube-system" namespace has status "Ready":"True"
	I0801 17:18:05.133474   25967 pod_ready.go:81] duration metric: took 511.077137ms waiting for pod "coredns-6d4b75cb6d-2p796" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:05.133480   25967 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220801171654-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:05.385792   25967 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0801 17:18:05.459656   25967 addons.go:414] enableAddons completed in 1.073321037s
	I0801 17:18:07.146032   25967 pod_ready.go:102] pod "etcd-pause-20220801171654-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:18:09.645373   25967 pod_ready.go:102] pod "etcd-pause-20220801171654-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:18:11.645469   25967 pod_ready.go:102] pod "etcd-pause-20220801171654-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:18:12.645007   25967 pod_ready.go:92] pod "etcd-pause-20220801171654-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:18:12.645020   25967 pod_ready.go:81] duration metric: took 7.511449134s waiting for pod "etcd-pause-20220801171654-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:12.645026   25967 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220801171654-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:14.658775   25967 pod_ready.go:102] pod "kube-apiserver-pause-20220801171654-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:18:17.157399   25967 pod_ready.go:102] pod "kube-apiserver-pause-20220801171654-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:18:17.656764   25967 pod_ready.go:92] pod "kube-apiserver-pause-20220801171654-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:18:17.656777   25967 pod_ready.go:81] duration metric: took 5.011672456s waiting for pod "kube-apiserver-pause-20220801171654-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:17.656784   25967 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220801171654-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:17.660956   25967 pod_ready.go:92] pod "kube-controller-manager-pause-20220801171654-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:18:17.660964   25967 pod_ready.go:81] duration metric: took 4.175325ms waiting for pod "kube-controller-manager-pause-20220801171654-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:17.660969   25967 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7pzw6" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:17.664846   25967 pod_ready.go:92] pod "kube-proxy-7pzw6" in "kube-system" namespace has status "Ready":"True"
	I0801 17:18:17.664854   25967 pod_ready.go:81] duration metric: took 3.880664ms waiting for pod "kube-proxy-7pzw6" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:17.664861   25967 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220801171654-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:17.668668   25967 pod_ready.go:92] pod "kube-scheduler-pause-20220801171654-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:18:17.668676   25967 pod_ready.go:81] duration metric: took 3.8094ms waiting for pod "kube-scheduler-pause-20220801171654-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:18:17.668680   25967 pod_ready.go:38] duration metric: took 13.050816317s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
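
Editor's note: each pod wait above is a poll on the pod's Ready condition until it reports True. A client-go sketch of one such probe (podReady is a name invented here; it assumes kubeconfig at the default ~/.kube/config location):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady fetches one pod and checks its Ready condition — the unit of
// work behind each pod_ready.go line in the log.
func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	c, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(context.Background(), c, "kube-system", "etcd-pause-20220801171654-13911")
	fmt.Println(ok, err)
}
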
	I0801 17:18:17.668694   25967 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:18:17.668742   25967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:18:17.677908   25967 api_server.go:71] duration metric: took 13.291468263s to wait for apiserver process to appear ...
	I0801 17:18:17.677918   25967 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:18:17.677927   25967 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63806/healthz ...
	I0801 17:18:17.682976   25967 api_server.go:266] https://127.0.0.1:63806/healthz returned 200:
	ok
	I0801 17:18:17.684032   25967 api_server.go:140] control plane version: v1.24.3
	I0801 17:18:17.684041   25967 api_server.go:130] duration metric: took 6.118107ms to wait for apiserver health ...
	I0801 17:18:17.684045   25967 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:18:17.688200   25967 system_pods.go:59] 7 kube-system pods found
	I0801 17:18:17.688211   25967 system_pods.go:61] "coredns-6d4b75cb6d-2p796" [03919d6b-ad6e-4a7a-aad1-936670c554a1] Running
	I0801 17:18:17.688215   25967 system_pods.go:61] "etcd-pause-20220801171654-13911" [734057de-416b-4bfe-844b-86504ccc8d95] Running
	I0801 17:18:17.688219   25967 system_pods.go:61] "kube-apiserver-pause-20220801171654-13911" [57bcb747-7eb4-448c-8803-b08d64586c96] Running
	I0801 17:18:17.688222   25967 system_pods.go:61] "kube-controller-manager-pause-20220801171654-13911" [c6e08221-0002-4265-b33c-8abd1680f144] Running
	I0801 17:18:17.688226   25967 system_pods.go:61] "kube-proxy-7pzw6" [2cd2e4db-aabc-4373-a763-2e1075cd0b06] Running
	I0801 17:18:17.688229   25967 system_pods.go:61] "kube-scheduler-pause-20220801171654-13911" [7a7c39d0-9334-4264-a5a6-c69765358507] Running
	I0801 17:18:17.688235   25967 system_pods.go:61] "storage-provisioner" [8c3663ab-1017-4dd4-9db4-decada3e740d] Running
	I0801 17:18:17.688238   25967 system_pods.go:74] duration metric: took 4.189646ms to wait for pod list to return data ...
	I0801 17:18:17.688243   25967 default_sa.go:34] waiting for default service account to be created ...
	I0801 17:18:17.690110   25967 default_sa.go:45] found service account: "default"
	I0801 17:18:17.690118   25967 default_sa.go:55] duration metric: took 1.872426ms for default service account to be created ...
	I0801 17:18:17.690122   25967 system_pods.go:116] waiting for k8s-apps to be running ...
	I0801 17:18:17.858365   25967 system_pods.go:86] 7 kube-system pods found
	I0801 17:18:17.858379   25967 system_pods.go:89] "coredns-6d4b75cb6d-2p796" [03919d6b-ad6e-4a7a-aad1-936670c554a1] Running
	I0801 17:18:17.858383   25967 system_pods.go:89] "etcd-pause-20220801171654-13911" [734057de-416b-4bfe-844b-86504ccc8d95] Running
	I0801 17:18:17.858387   25967 system_pods.go:89] "kube-apiserver-pause-20220801171654-13911" [57bcb747-7eb4-448c-8803-b08d64586c96] Running
	I0801 17:18:17.858391   25967 system_pods.go:89] "kube-controller-manager-pause-20220801171654-13911" [c6e08221-0002-4265-b33c-8abd1680f144] Running
	I0801 17:18:17.858394   25967 system_pods.go:89] "kube-proxy-7pzw6" [2cd2e4db-aabc-4373-a763-2e1075cd0b06] Running
	I0801 17:18:17.858398   25967 system_pods.go:89] "kube-scheduler-pause-20220801171654-13911" [7a7c39d0-9334-4264-a5a6-c69765358507] Running
	I0801 17:18:17.858415   25967 system_pods.go:89] "storage-provisioner" [8c3663ab-1017-4dd4-9db4-decada3e740d] Running
	I0801 17:18:17.858421   25967 system_pods.go:126] duration metric: took 168.293471ms to wait for k8s-apps to be running ...
	I0801 17:18:17.858430   25967 system_svc.go:44] waiting for kubelet service to be running ....
	I0801 17:18:17.858479   25967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:18:17.867717   25967 system_svc.go:56] duration metric: took 9.287926ms WaitForService to wait for kubelet.
	I0801 17:18:17.867731   25967 kubeadm.go:572] duration metric: took 13.481291203s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0801 17:18:17.867747   25967 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:18:18.053941   25967 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:18:18.053967   25967 node_conditions.go:123] node cpu capacity is 6
	I0801 17:18:18.053979   25967 node_conditions.go:105] duration metric: took 186.226128ms to run NodePressure ...
	I0801 17:18:18.053987   25967 start.go:216] waiting for startup goroutines ...
	I0801 17:18:18.083703   25967 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:18:18.105915   25967 out.go:177] * Done! kubectl is now configured to use "pause-20220801171654-13911" cluster and "default" namespace by default
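	
	The sequence above (pod Ready waits, apiserver process check, healthz probe, system-pod list, service-account check, kubelet service check, NodePressure) can be repeated by hand against this profile. A minimal sketch; the host-mapped apiserver port 63806 is ephemeral to this run, and the kubectl context name follows the minikube profile configured above:
	    kubectl --context pause-20220801171654-13911 -n kube-system \
	      wait --for=condition=Ready pod -l tier=control-plane --timeout=6m
	    curl -sk https://127.0.0.1:63806/healthz    # expect: ok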
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:17:01 UTC, end at Tue 2022-08-02 00:18:52 UTC. --
	Aug 02 00:17:47 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:47.690608960Z" level=info msg="ignoring event" container=df551da2027f70f9b64a2c59fb6da0b0c88e3d6f52f1505d3abd01a33295c50a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:17:47 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:47.698747211Z" level=info msg="ignoring event" container=4b7d7ca8f397a00d7a797a779b0331894e1a931007a50927972a3b7be0c474bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:17:47 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:47.699713468Z" level=info msg="ignoring event" container=4fa368bda0ab1cf6bab29e4d284d328acc98e1e210146bea85f4d51b13322bfa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:17:47 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:47.714707610Z" level=info msg="ignoring event" container=7ceab638a96536cb2914bd12618e677b1c2b53d5fc9686cc1861ffdfd91ed663 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:17:47 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:47.724665101Z" level=info msg="ignoring event" container=b226cd7c205fba66a1e04c5a9d2fd3df4e352c0e3432d1b8077e3927ab656b23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:17:48 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:48.772580345Z" level=info msg="ignoring event" container=46df2c9826ffb0da4e2c0c62c5ba2de0321dd3f0abcabff3453bf1407506066e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:17:57 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:57.612736632Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=9daeb6c5e40b10fe543f50ae5e84fce85277d0da58375b20c7676ab4f7b3f1a4
	Aug 02 00:17:57 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:57.652309962Z" level=info msg="ignoring event" container=9daeb6c5e40b10fe543f50ae5e84fce85277d0da58375b20c7676ab4f7b3f1a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:17:57 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:57.806330107Z" level=info msg="Removing stale sandbox c3cd140d8860b75327302b937ddc83996b8282465bf6d1d83c5652aac229c42c (4fa368bda0ab1cf6bab29e4d284d328acc98e1e210146bea85f4d51b13322bfa)"
	Aug 02 00:17:57 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:57.807649584Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 42496a3463812efb0137a7df1692207f97fccb162a3d9dbd471ec980ac7d2c74 b02a919e7363a149f05dec06b4dae9db6abe135a681937b2ac923d4175fbdd50], retrying...."
	Aug 02 00:17:57 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:57.895593805Z" level=info msg="Removing stale sandbox f1ea813e0303ed7eb0da33c70284bb504f57765655c89952008afeabbcb11487 (df551da2027f70f9b64a2c59fb6da0b0c88e3d6f52f1505d3abd01a33295c50a)"
	Aug 02 00:17:57 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:57.896855608Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 42496a3463812efb0137a7df1692207f97fccb162a3d9dbd471ec980ac7d2c74 33f98bc9e0f6697dc3829deb0393c944d19464387f3b39442ba01f1a182ced0d], retrying...."
	Aug 02 00:17:57 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:57.985227734Z" level=info msg="Removing stale sandbox 1df7db54bb348fdd165077fbe0ed3b7632eebed57d2adb98eaad2c59bad7f94c (38cb30691c4183ddb76880814cb654eed8f4634d9d35e3580dd74d219eec15df)"
	Aug 02 00:17:57 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:57.986420079Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 42496a3463812efb0137a7df1692207f97fccb162a3d9dbd471ec980ac7d2c74 1ffa2c571de95697b1fdb692784f4d5fd794b4efc25fc675bb623b4faf5ccc62], retrying...."
	Aug 02 00:17:58 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:58.072698363Z" level=info msg="Removing stale sandbox 62e9e8b55ee17b67cfe6cee879b0324943c8986010659b53d5dcd67256289e91 (7ceab638a96536cb2914bd12618e677b1c2b53d5fc9686cc1861ffdfd91ed663)"
	Aug 02 00:17:58 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:58.073918286Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 42496a3463812efb0137a7df1692207f97fccb162a3d9dbd471ec980ac7d2c74 b3b0acb01bbbd59271d670fb8a03e69cfe714dac7f9b7e1a611b94d1e391daa2], retrying...."
	Aug 02 00:17:58 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:58.095698889Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 02 00:17:58 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:58.130252933Z" level=info msg="Loading containers: done."
	Aug 02 00:17:58 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:58.140010244Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Aug 02 00:17:58 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:58.140078520Z" level=info msg="Daemon has completed initialization"
	Aug 02 00:17:58 pause-20220801171654-13911 systemd[1]: Started Docker Application Container Engine.
	Aug 02 00:17:58 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:58.163007270Z" level=info msg="API listen on [::]:2376"
	Aug 02 00:17:58 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:58.168552987Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 02 00:17:58 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:58.227507580Z" level=error msg="Failed to compute size of container rootfs dc09836f1088911d8956656cc12b7be20447549cf87844789faba773e1188cc9: mount does not exist"
	Aug 02 00:17:58 pause-20220801171654-13911 dockerd[3790]: time="2022-08-02T00:17:58.298301562Z" level=error msg="Failed to compute size of container rootfs 1a5442c80084b2bec4dded4e67d360a2383765699ec24da1053834e2d13319b0: mount does not exist"
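	
	The daemon journal above can be pulled straight from the node; a sketch, assuming the docker systemd unit name as on minikube's Ubuntu base image (profile name from this run):
	    minikube -p pause-20220801171654-13911 ssh -- 'sudo journalctl -u docker --no-pager | tail -n 30'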
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	0518594806d69       6e38f40d628db       47 seconds ago       Running             storage-provisioner       0                   d6a13dff66772
	9b2d19883e5ac       aebe758cef4cd       53 seconds ago       Running             etcd                      2                   cf5223fb45b1f
	8bc2434ac4167       586c112956dfc       53 seconds ago       Running             kube-controller-manager   2                   5f694eb062eb5
	748fe6b5ee5bf       a4ca41631cc7a       54 seconds ago       Running             coredns                   1                   1ed1d44af729e
	153e0d57d46fa       3a5aa3a515f5d       54 seconds ago       Running             kube-scheduler            2                   904137fde151b
	2af40133fe09e       d521dd763e2e3       54 seconds ago       Running             kube-apiserver            2                   3fdf27582776d
	110078ba735b4       2ae1ba6417cbc       54 seconds ago       Running             kube-proxy                1                   091fbf3ccb817
	46df2c9826ffb       d521dd763e2e3       About a minute ago   Exited              kube-apiserver            1                   4fa368bda0ab1
	b226cd7c205fb       aebe758cef4cd       About a minute ago   Exited              etcd                      1                   df551da2027f7
	9daeb6c5e40b1       3a5aa3a515f5d       About a minute ago   Exited              kube-scheduler            1                   7ceab638a9653
	4b7d7ca8f397a       586c112956dfc       About a minute ago   Exited              kube-controller-manager   1                   38cb30691c418
	13ca3c1deeae9       a4ca41631cc7a       About a minute ago   Exited              coredns                   0                   c74e6265070fd
	341ce1e8d8233       2ae1ba6417cbc       About a minute ago   Exited              kube-proxy                0                   2721e88a50cff
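	
	A rough equivalent of the status table above can be taken from the node's Docker runtime (a sketch; minikube's own collector may use a different tool):
	    minikube -p pause-20220801171654-13911 ssh -- docker ps -a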
	
	* 
	* ==> coredns [13ca3c1deeae] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [748fe6b5ee5b] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
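	
	Both coredns generations above can be fetched without the full log bundle; a sketch using the k8s-app=kube-dns label the test waits on, plus the exited container's ID from the section header:
	    kubectl --context pause-20220801171654-13911 -n kube-system logs -l k8s-app=kube-dns --tail=20
	    minikube -p pause-20220801171654-13911 ssh -- 'docker logs 13ca3c1deeae 2>&1 | tail'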
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001434] FS-Cache: O-key=[8] 'a336070400000000'
	[  +0.001106] FS-Cache: N-cookie c=000000006143e049 [p=000000003e51d12b fl=2 nc=0 na=1]
	[  +0.001736] FS-Cache: N-cookie d=00000000c02ab632 n=000000007370561b
	[  +0.001442] FS-Cache: N-key=[8] 'a336070400000000'
	[  +0.001754] FS-Cache: Duplicate cookie detected
	[  +0.001081] FS-Cache: O-cookie c=000000001fabbad4 [p=000000003e51d12b fl=226 nc=0 na=1]
	[  +0.001802] FS-Cache: O-cookie d=00000000c02ab632 n=000000000ab19c89
	[  +0.001454] FS-Cache: O-key=[8] 'a336070400000000'
	[  +0.001113] FS-Cache: N-cookie c=000000006143e049 [p=000000003e51d12b fl=2 nc=0 na=1]
	[  +0.001743] FS-Cache: N-cookie d=00000000c02ab632 n=00000000c182bdf9
	[  +0.001441] FS-Cache: N-key=[8] 'a336070400000000'
	[  +3.097702] FS-Cache: Duplicate cookie detected
	[  +0.001034] FS-Cache: O-cookie c=000000006d9e8dba [p=000000003e51d12b fl=226 nc=0 na=1]
	[  +0.001834] FS-Cache: O-cookie d=00000000c02ab632 n=00000000c6a910ed
	[  +0.001479] FS-Cache: O-key=[8] 'a236070400000000'
	[  +0.001160] FS-Cache: N-cookie c=00000000f4913f44 [p=000000003e51d12b fl=2 nc=0 na=1]
	[  +0.001828] FS-Cache: N-cookie d=00000000c02ab632 n=00000000c182bdf9
	[  +0.001431] FS-Cache: N-key=[8] 'a236070400000000'
	[  +0.448495] FS-Cache: Duplicate cookie detected
	[  +0.001028] FS-Cache: O-cookie c=00000000ea6aea4f [p=000000003e51d12b fl=226 nc=0 na=1]
	[  +0.001781] FS-Cache: O-cookie d=00000000c02ab632 n=00000000f2ac4d13
	[  +0.001449] FS-Cache: O-key=[8] 'aa36070400000000'
	[  +0.001099] FS-Cache: N-cookie c=0000000088a2da56 [p=000000003e51d12b fl=2 nc=0 na=1]
	[  +0.001742] FS-Cache: N-cookie d=00000000c02ab632 n=00000000f06bebfc
	[  +0.001427] FS-Cache: N-key=[8] 'aa36070400000000'
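	
	To re-check the kernel ring buffer on the node (a sketch, assuming a dmesg binary in the node image):
	    minikube -p pause-20220801171654-13911 ssh -- 'sudo dmesg | tail -n 25'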
	
	* 
	* ==> etcd [9b2d19883e5a] <==
	* {"level":"info","ts":"2022-08-02T00:18:00.099Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-08-02T00:18:00.099Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-08-02T00:18:00.100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-08-02T00:18:00.100Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-08-02T00:18:00.100Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:18:00.100Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:18:00.103Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-08-02T00:18:00.103Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-08-02T00:18:00.103Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-08-02T00:18:00.103Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:18:00.103Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:18:01.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2022-08-02T00:18:01.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-08-02T00:18:01.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-08-02T00:18:01.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2022-08-02T00:18:01.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-08-02T00:18:01.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2022-08-02T00:18:01.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-08-02T00:18:01.132Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:18:01.132Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:18:01.132Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-20220801171654-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:18:01.133Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:18:01.133Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-08-02T00:18:01.134Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:18:01.135Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> etcd [b226cd7c205f] <==
	* {"level":"info","ts":"2022-08-02T00:17:44.124Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:17:44.125Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-08-02T00:17:44.125Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-08-02T00:17:45.620Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-08-02T00:17:45.620Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:17:45.620Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:17:45.620Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-08-02T00:17:45.620Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-08-02T00:17:45.620Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-08-02T00:17:45.620Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-08-02T00:17:45.621Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-20220801171654-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:17:45.621Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:17:45.621Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:17:45.622Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:17:45.622Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-08-02T00:17:45.623Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-08-02T00:17:45.623Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:17:47.621Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-08-02T00:17:47.621Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-20220801171654-13911","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/08/02 00:17:47 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/08/02 00:17:47 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-08-02T00:17:47.692Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-08-02T00:17:47.694Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:17:47.695Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:17:47.695Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-20220801171654-13911","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  00:19:02 up 44 min,  0 users,  load average: 0.77, 1.18, 0.98
	Linux pause-20220801171654-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2af40133fe09] <==
	* I0802 00:18:02.817038       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0802 00:18:02.817045       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0802 00:18:02.817054       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0802 00:18:02.817289       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0802 00:18:02.817314       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0802 00:18:02.817447       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0802 00:18:02.817454       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0802 00:18:02.817614       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0802 00:18:02.818005       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0802 00:18:02.895957       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0802 00:18:02.897587       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 00:18:02.917383       1 cache.go:39] Caches are synced for autoregister controller
	I0802 00:18:02.917537       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 00:18:02.917711       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0802 00:18:02.917824       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0802 00:18:02.920083       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0802 00:18:02.930059       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0802 00:18:03.589202       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0802 00:18:03.822479       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 00:18:04.807905       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 00:18:05.325586       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 00:18:05.337048       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 00:18:05.342108       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 00:18:05.347900       1 controller.go:611] quota admission added evaluator for: endpoints
	I0802 00:18:15.139212       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [46df2c9826ff] <==
	* W0802 00:17:48.625677       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625698       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625718       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625831       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625896       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625921       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625942       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625952       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625968       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625976       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625979       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625988       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625994       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625969       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.626010       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.626011       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.625999       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.626017       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.626036       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.626096       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.626099       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.626100       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.626097       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.626124       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:17:48.626246       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
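	
	The burst of connection-refused lines above is the old apiserver losing etcd during the restart; one way to confirm whether anything is listening on 2379 at that point (a sketch):
	    minikube -p pause-20220801171654-13911 ssh -- 'sudo ss -ltnp | grep 2379'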
	
	* 
	* ==> kube-controller-manager [4b7d7ca8f397] <==
	* I0802 00:17:43.420616       1 serving.go:348] Generated self-signed cert in-memory
	I0802 00:17:43.638831       1 controllermanager.go:180] Version: v1.24.3
	I0802 00:17:43.638864       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:17:43.639840       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0802 00:17:43.639866       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0802 00:17:43.639852       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0802 00:17:43.640036       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [8bc2434ac416] <==
	* I0802 00:18:15.134583       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0802 00:18:15.136928       1 shared_informer.go:262] Caches are synced for GC
	I0802 00:18:15.136975       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0802 00:18:15.137052       1 shared_informer.go:262] Caches are synced for expand
	I0802 00:18:15.158327       1 shared_informer.go:262] Caches are synced for stateful set
	I0802 00:18:15.160709       1 shared_informer.go:262] Caches are synced for cronjob
	I0802 00:18:15.163135       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0802 00:18:15.163199       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0802 00:18:15.163258       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0802 00:18:15.163277       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0802 00:18:15.165690       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0802 00:18:15.167185       1 shared_informer.go:262] Caches are synced for job
	I0802 00:18:15.167233       1 shared_informer.go:262] Caches are synced for taint
	I0802 00:18:15.167283       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0802 00:18:15.167307       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0802 00:18:15.167385       1 node_lifecycle_controller.go:1014] Missing timestamp for Node pause-20220801171654-13911. Assuming now as a timestamp.
	I0802 00:18:15.167430       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0802 00:18:15.167505       1 event.go:294] "Event occurred" object="pause-20220801171654-13911" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220801171654-13911 event: Registered Node pause-20220801171654-13911 in Controller"
	I0802 00:18:15.253413       1 shared_informer.go:262] Caches are synced for HPA
	I0802 00:18:15.268544       1 shared_informer.go:262] Caches are synced for attach detach
	I0802 00:18:15.377595       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 00:18:15.403731       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 00:18:15.791305       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 00:18:15.834683       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 00:18:15.834715       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
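	
	The restarted controller-manager serves securely on 127.0.0.1:10257 (see the [4b7d7ca8f397] instance above); its healthz can be probed from the node, assuming curl is present in the image:
	    minikube -p pause-20220801171654-13911 ssh -- curl -sk https://127.0.0.1:10257/healthz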
	
	* 
	* ==> kube-proxy [110078ba735b] <==
	* E0802 00:17:58.788632       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220801171654-13911": dial tcp 192.168.67.2:8443: connect: connection refused
	I0802 00:18:02.895679       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:18:02.895722       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:18:02.895762       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:18:02.915845       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:18:02.915883       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:18:02.915890       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:18:02.915899       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:18:02.915922       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:18:02.916185       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:18:02.917203       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:18:02.917210       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:18:02.918670       1 config.go:317] "Starting service config controller"
	I0802 00:18:02.918743       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:18:02.918666       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:18:02.918814       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:18:02.918846       1 config.go:444] "Starting node config controller"
	I0802 00:18:02.918912       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:18:03.019552       1 shared_informer.go:262] Caches are synced for node config
	I0802 00:18:03.019599       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0802 00:18:03.019760       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [341ce1e8d823] <==
	* I0802 00:17:34.927086       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:17:34.927189       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:17:34.927230       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:17:34.948345       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:17:34.948364       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:17:34.948371       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:17:34.948379       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:17:34.948398       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:17:34.948568       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:17:34.949212       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:17:34.949225       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:17:34.949767       1 config.go:317] "Starting service config controller"
	I0802 00:17:34.949781       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:17:34.949807       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:17:34.949810       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:17:34.950459       1 config.go:444] "Starting node config controller"
	I0802 00:17:34.950488       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:17:35.050582       1 shared_informer.go:262] Caches are synced for node config
	I0802 00:17:35.050671       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0802 00:17:35.050689       1 shared_informer.go:262] Caches are synced for service config
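	
	With the iptables proxier in use above, service programming can be spot-checked on the node (a sketch; KUBE-SERVICES is the standard kube-proxy chain name):
	    minikube -p pause-20220801171654-13911 ssh -- 'sudo iptables -t nat -L KUBE-SERVICES -n | head'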
	
	* 
	* ==> kube-scheduler [153e0d57d46f] <==
	* I0802 00:17:59.803823       1 serving.go:348] Generated self-signed cert in-memory
	W0802 00:18:02.818502       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 00:18:02.818585       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 00:18:02.818603       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 00:18:02.818615       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 00:18:02.887422       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0802 00:18:02.887495       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:18:02.890889       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0802 00:18:02.891322       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 00:18:02.891383       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 00:18:02.891409       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 00:18:02.991582       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [9daeb6c5e40b] <==
	* I0802 00:17:43.339815       1 serving.go:348] Generated self-signed cert in-memory
	W0802 00:17:47.290329       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 00:17:47.290395       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 00:17:47.290403       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 00:17:47.290410       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 00:17:47.386678       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0802 00:17:47.387283       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:17:47.392355       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0802 00:17:47.393189       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 00:17:47.393202       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 00:17:47.393218       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 00:17:47.493993       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 00:17:47.609975       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0802 00:17:47.610064       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0802 00:17:47.610294       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
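	
	Both scheduler instances bind 127.0.0.1:10259; the live one can be probed the same way as the controller-manager (a sketch, assuming curl on the node):
	    minikube -p pause-20220801171654-13911 ssh -- curl -sk https://127.0.0.1:10259/healthz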
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:17:01 UTC, end at Tue 2022-08-02 00:19:04 UTC. --
	Aug 02 00:17:56 pause-20220801171654-13911 kubelet[1942]: W0802 00:17:56.812103    1942 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=336": dial tcp 192.168.67.2:8443: connect: connection refused
	Aug 02 00:17:56 pause-20220801171654-13911 kubelet[1942]: E0802 00:17:56.812201    1942 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=336": dial tcp 192.168.67.2:8443: connect: connection refused
	Aug 02 00:17:56 pause-20220801171654-13911 kubelet[1942]: W0802 00:17:56.911462    1942 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=259": dial tcp 192.168.67.2:8443: connect: connection refused
	Aug 02 00:17:56 pause-20220801171654-13911 kubelet[1942]: E0802 00:17:56.911555    1942 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=259": dial tcp 192.168.67.2:8443: connect: connection refused
	Aug 02 00:17:57 pause-20220801171654-13911 kubelet[1942]: W0802 00:17:57.083398    1942 reflector.go:324] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)pause-20220801171654-13911&resourceVersion=392": dial tcp 192.168.67.2:8443: connect: connection refused
	Aug 02 00:17:57 pause-20220801171654-13911 kubelet[1942]: E0802 00:17:57.083446    1942 reflector.go:138] pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)pause-20220801171654-13911&resourceVersion=392": dial tcp 192.168.67.2:8443: connect: connection refused
	Aug 02 00:17:58 pause-20220801171654-13911 kubelet[1942]: I0802 00:17:58.196623    1942 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7ceab638a96536cb2914bd12618e677b1c2b53d5fc9686cc1861ffdfd91ed663"
	Aug 02 00:17:58 pause-20220801171654-13911 kubelet[1942]: I0802 00:17:58.196658    1942 scope.go:110] "RemoveContainer" containerID="9c7c574696799a02899a40c680181432a628abd077cff5606728b05792e12791"
	Aug 02 00:17:58 pause-20220801171654-13911 kubelet[1942]: I0802 00:17:58.197208    1942 status_manager.go:664] "Failed to get status for pod" podUID=b9d107a40a525c99c1eabfc32128f560 pod="kube-system/kube-scheduler-pause-20220801171654-13911" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-20220801171654-13911\": dial tcp 192.168.67.2:8443: connect: connection refused"
	Aug 02 00:17:58 pause-20220801171654-13911 kubelet[1942]: I0802 00:17:58.214844    1942 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="38cb30691c4183ddb76880814cb654eed8f4634d9d35e3580dd74d219eec15df"
	Aug 02 00:17:58 pause-20220801171654-13911 kubelet[1942]: I0802 00:17:58.214865    1942 scope.go:110] "RemoveContainer" containerID="dc09836f1088911d8956656cc12b7be20447549cf87844789faba773e1188cc9"
	Aug 02 00:17:58 pause-20220801171654-13911 kubelet[1942]: I0802 00:17:58.288924    1942 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="df551da2027f70f9b64a2c59fb6da0b0c88e3d6f52f1505d3abd01a33295c50a"
	Aug 02 00:17:58 pause-20220801171654-13911 kubelet[1942]: I0802 00:17:58.288990    1942 scope.go:110] "RemoveContainer" containerID="1a5442c80084b2bec4dded4e67d360a2383765699ec24da1053834e2d13319b0"
	Aug 02 00:17:58 pause-20220801171654-13911 kubelet[1942]: I0802 00:17:58.306075    1942 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4fa368bda0ab1cf6bab29e4d284d328acc98e1e210146bea85f4d51b13322bfa"
	Aug 02 00:17:58 pause-20220801171654-13911 kubelet[1942]: I0802 00:17:58.306113    1942 scope.go:110] "RemoveContainer" containerID="17e71ee3deebecfbc0333f6c02ed22075d8dc504e0e9f762bf546299f1118493"
	Aug 02 00:17:58 pause-20220801171654-13911 kubelet[1942]: I0802 00:17:58.306514    1942 status_manager.go:664] "Failed to get status for pod" podUID=361a21f86353fdedb0e40242f8bfafb0 pod="kube-system/kube-apiserver-pause-20220801171654-13911" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20220801171654-13911\": dial tcp 192.168.67.2:8443: connect: connection refused"
	Aug 02 00:17:58 pause-20220801171654-13911 kubelet[1942]: E0802 00:17:58.402511    1942 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20220801171654-13911?timeout=10s": dial tcp 192.168.67.2:8443: connect: connection refused
	Aug 02 00:18:05 pause-20220801171654-13911 kubelet[1942]: I0802 00:18:05.360688    1942 topology_manager.go:200] "Topology Admit Handler"
	Aug 02 00:18:05 pause-20220801171654-13911 kubelet[1942]: I0802 00:18:05.505477    1942 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8c3663ab-1017-4dd4-9db4-decada3e740d-tmp\") pod \"storage-provisioner\" (UID: \"8c3663ab-1017-4dd4-9db4-decada3e740d\") " pod="kube-system/storage-provisioner"
	Aug 02 00:18:05 pause-20220801171654-13911 kubelet[1942]: I0802 00:18:05.505550    1942 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdbds\" (UniqueName: \"kubernetes.io/projected/8c3663ab-1017-4dd4-9db4-decada3e740d-kube-api-access-zdbds\") pod \"storage-provisioner\" (UID: \"8c3663ab-1017-4dd4-9db4-decada3e740d\") " pod="kube-system/storage-provisioner"
	Aug 02 00:18:18 pause-20220801171654-13911 kubelet[1942]: I0802 00:18:18.697888    1942 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 02 00:18:18 pause-20220801171654-13911 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 02 00:18:18 pause-20220801171654-13911 systemd[1]: kubelet.service: Succeeded.
	Aug 02 00:18:18 pause-20220801171654-13911 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 02 00:18:18 pause-20220801171654-13911 systemd[1]: kubelet.service: Consumed 1.702s CPU time.
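	
	The test's own kubelet probe is "sudo systemctl is-active --quiet service kubelet" (see the WaitForService step earlier); after the stop recorded above, the same probe by hand would report inactive:
	    minikube -p pause-20220801171654-13911 ssh -- sudo systemctl is-active kubelet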
	
	* 
	* ==> storage-provisioner [0518594806d6] <==
	* I0802 00:18:05.848595       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 00:18:05.856421       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 00:18:05.856498       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 00:18:05.864771       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 00:18:05.864898       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220801171654-13911_e1fa4150-7b8b-4621-b02f-4c023b52223a!
	I0802 00:18:05.864936       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"67e190f7-ef21-42ed-9312-8701b6e5294d", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220801171654-13911_e1fa4150-7b8b-4621-b02f-4c023b52223a became leader
	I0802 00:18:05.965728       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220801171654-13911_e1fa4150-7b8b-4621-b02f-4c023b52223a!
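	
	The leader lease acquired above is stored as a kube-system Endpoints object; a sketch for inspecting it:
	    kubectl --context pause-20220801171654-13911 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml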
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0801 17:19:02.498226   26130 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
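
The describe-nodes collection failed with a TLS handshake timeout; the exact command can be retried by hand once the apiserver settles (copied from the stderr above):
    minikube -p pause-20220801171654-13911 ssh -- sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig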
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220801171654-13911 -n pause-20220801171654-13911
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220801171654-13911 -n pause-20220801171654-13911: exit status 2 (16.186772135s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-20220801171654-13911" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/VerifyStatus (61.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (252.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220801172716-13911 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0801 17:27:21.475714   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:27:37.499102   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 17:27:39.135609   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 17:27:50.318707   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:27:50.325055   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:27:50.335386   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:27:50.357559   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:27:50.398288   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:27:50.479572   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:27:50.640827   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:27:50.960933   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:27:51.601078   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:27:52.882981   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:27:55.444605   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:28:00.564811   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220801172716-13911 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m11.668697595s)

-- stdout --
	* [old-k8s-version-20220801172716-13911] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-20220801172716-13911 in cluster old-k8s-version-20220801172716-13911
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0801 17:27:17.029215   29409 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:27:17.029453   29409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:27:17.029459   29409 out.go:309] Setting ErrFile to fd 2...
	I0801 17:27:17.029463   29409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:27:17.029570   29409 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:27:17.030101   29409 out.go:303] Setting JSON to false
	I0801 17:27:17.046033   29409 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8808,"bootTime":1659391229,"procs":380,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:27:17.046149   29409 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:27:17.068097   29409 out.go:177] * [old-k8s-version-20220801172716-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:27:17.110302   29409 notify.go:193] Checking for updates...
	I0801 17:27:17.132222   29409 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:27:17.174206   29409 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:27:17.217018   29409 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:27:17.252683   29409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:27:17.350183   29409 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:27:17.388001   29409 config.go:180] Loaded profile config "kubenet-20220801171037-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:27:17.388106   29409 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:27:17.457362   29409 docker.go:137] docker version: linux-20.10.17
	I0801 17:27:17.457498   29409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:27:17.594455   29409 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2022-08-02 00:27:17.5236131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:27:17.619267   29409 out.go:177] * Using the docker driver based on user configuration
	I0801 17:27:17.640325   29409 start.go:284] selected driver: docker
	I0801 17:27:17.640348   29409 start.go:808] validating driver "docker" against <nil>
	I0801 17:27:17.640369   29409 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:27:17.642547   29409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:27:17.778000   29409 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:59 SystemTime:2022-08-02 00:27:17.707239503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:27:17.778123   29409 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0801 17:27:17.778276   29409 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 17:27:17.806903   29409 out.go:177] * Using Docker Desktop driver with root privileges
	I0801 17:27:17.827088   29409 cni.go:95] Creating CNI manager for ""
	I0801 17:27:17.827125   29409 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:27:17.827143   29409 start_flags.go:310] config:
	{Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:27:17.848819   29409 out.go:177] * Starting control plane node old-k8s-version-20220801172716-13911 in cluster old-k8s-version-20220801172716-13911
	I0801 17:27:17.890936   29409 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:27:17.927947   29409 out.go:177] * Pulling base image ...
	I0801 17:27:17.985853   29409 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:27:17.985857   29409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:27:17.985933   29409 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0801 17:27:17.985952   29409 cache.go:57] Caching tarball of preloaded images
	I0801 17:27:17.986160   29409 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:27:17.986176   29409 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0801 17:27:17.987181   29409 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/config.json ...
	I0801 17:27:17.987319   29409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/config.json: {Name:mk3e29e0882b46e9c2d1f24ef99bd644b49cf327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:27:18.050619   29409 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:27:18.050637   29409 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:27:18.050647   29409 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:27:18.050686   29409 start.go:371] acquiring machines lock for old-k8s-version-20220801172716-13911: {Name:mkbe9b0aeba6b12111b317502f6798dbe4170df1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:27:18.050818   29409 start.go:375] acquired machines lock for "old-k8s-version-20220801172716-13911" in 121.265µs
	I0801 17:27:18.050844   29409 start.go:92] Provisioning new machine with config: &{Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:27:18.050911   29409 start.go:132] createHost starting for "" (driver="docker")
	I0801 17:27:18.094594   29409 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0801 17:27:18.094950   29409 start.go:166] libmachine.API.Create for "old-k8s-version-20220801172716-13911" (driver="docker")
	I0801 17:27:18.095002   29409 client.go:168] LocalClient.Create starting
	I0801 17:27:18.095116   29409 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem
	I0801 17:27:18.095176   29409 main.go:134] libmachine: Decoding PEM data...
	I0801 17:27:18.095202   29409 main.go:134] libmachine: Parsing certificate...
	I0801 17:27:18.095283   29409 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem
	I0801 17:27:18.095337   29409 main.go:134] libmachine: Decoding PEM data...
	I0801 17:27:18.095356   29409 main.go:134] libmachine: Parsing certificate...
	I0801 17:27:18.096097   29409 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220801172716-13911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0801 17:27:18.161920   29409 cli_runner.go:211] docker network inspect old-k8s-version-20220801172716-13911 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0801 17:27:18.162012   29409 network_create.go:272] running [docker network inspect old-k8s-version-20220801172716-13911] to gather additional debugging logs...
	I0801 17:27:18.162032   29409 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220801172716-13911
	W0801 17:27:18.226131   29409 cli_runner.go:211] docker network inspect old-k8s-version-20220801172716-13911 returned with exit code 1
	I0801 17:27:18.226156   29409 network_create.go:275] error running [docker network inspect old-k8s-version-20220801172716-13911]: docker network inspect old-k8s-version-20220801172716-13911: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220801172716-13911
	I0801 17:27:18.226181   29409 network_create.go:277] output of [docker network inspect old-k8s-version-20220801172716-13911]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220801172716-13911
	
	** /stderr **
	I0801 17:27:18.226252   29409 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0801 17:27:18.291213   29409 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0004126b8] misses:0}
	I0801 17:27:18.291253   29409 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:27:18.291270   29409 network_create.go:115] attempt to create docker network old-k8s-version-20220801172716-13911 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0801 17:27:18.291338   29409 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220801172716-13911 old-k8s-version-20220801172716-13911
	W0801 17:27:18.360343   29409 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220801172716-13911 old-k8s-version-20220801172716-13911 returned with exit code 1
	W0801 17:27:18.360380   29409 network_create.go:107] failed to create docker network old-k8s-version-20220801172716-13911 192.168.49.0/24, will retry: subnet is taken
	I0801 17:27:18.360667   29409 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004126b8] amended:false}} dirty:map[] misses:0}
	I0801 17:27:18.360685   29409 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:27:18.360892   29409 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004126b8] amended:true}} dirty:map[192.168.49.0:0xc0004126b8 192.168.58.0:0xc00040e3e8] misses:0}
	I0801 17:27:18.360912   29409 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:27:18.360920   29409 network_create.go:115] attempt to create docker network old-k8s-version-20220801172716-13911 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0801 17:27:18.360984   29409 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220801172716-13911 old-k8s-version-20220801172716-13911
	W0801 17:27:18.429710   29409 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220801172716-13911 old-k8s-version-20220801172716-13911 returned with exit code 1
	W0801 17:27:18.429754   29409 network_create.go:107] failed to create docker network old-k8s-version-20220801172716-13911 192.168.58.0/24, will retry: subnet is taken
	I0801 17:27:18.430039   29409 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004126b8] amended:true}} dirty:map[192.168.49.0:0xc0004126b8 192.168.58.0:0xc00040e3e8] misses:1}
	I0801 17:27:18.430057   29409 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:27:18.430292   29409 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004126b8] amended:true}} dirty:map[192.168.49.0:0xc0004126b8 192.168.58.0:0xc00040e3e8 192.168.67.0:0xc000f284b0] misses:1}
	I0801 17:27:18.430308   29409 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:27:18.430317   29409 network_create.go:115] attempt to create docker network old-k8s-version-20220801172716-13911 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0801 17:27:18.430384   29409 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220801172716-13911 old-k8s-version-20220801172716-13911
	W0801 17:27:18.498528   29409 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220801172716-13911 old-k8s-version-20220801172716-13911 returned with exit code 1
	W0801 17:27:18.498563   29409 network_create.go:107] failed to create docker network old-k8s-version-20220801172716-13911 192.168.67.0/24, will retry: subnet is taken
	I0801 17:27:18.498828   29409 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004126b8] amended:true}} dirty:map[192.168.49.0:0xc0004126b8 192.168.58.0:0xc00040e3e8 192.168.67.0:0xc000f284b0] misses:2}
	I0801 17:27:18.498846   29409 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:27:18.499067   29409 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004126b8] amended:true}} dirty:map[192.168.49.0:0xc0004126b8 192.168.58.0:0xc00040e3e8 192.168.67.0:0xc000f284b0 192.168.76.0:0xc000f284e8] misses:2}
	I0801 17:27:18.499080   29409 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 17:27:18.499087   29409 network_create.go:115] attempt to create docker network old-k8s-version-20220801172716-13911 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0801 17:27:18.499149   29409 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220801172716-13911 old-k8s-version-20220801172716-13911
	I0801 17:27:18.611000   29409 network_create.go:99] docker network old-k8s-version-20220801172716-13911 192.168.76.0/24 created
	I0801 17:27:18.611033   29409 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-20220801172716-13911" container
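The three failed attempts above show minikube's free-subnet search: it starts at 192.168.49.0/24 and steps the third octet by 9 (49, 58, 67, 76, ...) until `docker network create` stops reporting the subnet as taken. A stripped-down sketch of that loop, as the pattern appears in this log (the reservation bookkeeping from network.go is omitted and the names are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        name := "old-k8s-version-20220801172716-13911"
        for octet := 49; octet < 255; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            gateway := fmt.Sprintf("192.168.%d.1", octet)
            out, err := exec.Command("docker", "network", "create", "--driver=bridge",
                "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
            if err == nil {
                fmt.Printf("created %s on %s\n", name, subnet) // 192.168.76.0/24 in the run above
                return
            }
            fmt.Printf("subnet %s rejected (%s), trying next candidate\n", subnet, out)
        }
    }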
	I0801 17:27:18.611152   29409 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0801 17:27:18.680998   29409 cli_runner.go:164] Run: docker volume create old-k8s-version-20220801172716-13911 --label name.minikube.sigs.k8s.io=old-k8s-version-20220801172716-13911 --label created_by.minikube.sigs.k8s.io=true
	I0801 17:27:18.753329   29409 oci.go:103] Successfully created a docker volume old-k8s-version-20220801172716-13911
	I0801 17:27:18.753471   29409 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220801172716-13911-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220801172716-13911 --entrypoint /usr/bin/test -v old-k8s-version-20220801172716-13911:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib
	I0801 17:27:19.487702   29409 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220801172716-13911
	I0801 17:27:19.487747   29409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:27:19.487762   29409 kic.go:179] Starting extracting preloaded images to volume ...
	I0801 17:27:19.487853   29409 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220801172716-13911:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0801 17:27:24.301649   29409 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220801172716-13911:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.813667506s)
	I0801 17:27:24.301675   29409 kic.go:188] duration metric: took 4.813859 seconds to extract preloaded images to volume
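The extraction step timed above is the preload fast path: the lz4 tarball is bind-mounted read-only into a throwaway kicbase container whose entrypoint is tar, which unpacks the cached images directly into the cluster's Docker volume. The same invocation reduced to a sketch, with the long host path shortened to a hypothetical placeholder:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Placeholder path; substitute the tarball, volume, and image from the log above.
        tarball := "/path/to/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
        volume := "old-k8s-version-20220801172716-13911"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579"
        cmd := exec.Command("docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }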
	I0801 17:27:24.301777   29409 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0801 17:27:24.440753   29409 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220801172716-13911 --name old-k8s-version-20220801172716-13911 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220801172716-13911 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220801172716-13911 --network old-k8s-version-20220801172716-13911 --ip 192.168.76.2 --volume old-k8s-version-20220801172716-13911:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8
	I0801 17:27:24.984380   29409 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220801172716-13911 --format={{.State.Running}}
	I0801 17:27:25.074414   29409 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220801172716-13911 --format={{.State.Status}}
	I0801 17:27:25.162879   29409 cli_runner.go:164] Run: docker exec old-k8s-version-20220801172716-13911 stat /var/lib/dpkg/alternatives/iptables
	I0801 17:27:25.301851   29409 oci.go:144] the created container "old-k8s-version-20220801172716-13911" has a running status.
	I0801 17:27:25.301881   29409 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa...
	I0801 17:27:25.345827   29409 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0801 17:27:25.466060   29409 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220801172716-13911 --format={{.State.Status}}
	I0801 17:27:25.541364   29409 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0801 17:27:25.541385   29409 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220801172716-13911 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0801 17:27:25.672815   29409 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220801172716-13911 --format={{.State.Status}}
	I0801 17:27:25.745831   29409 machine.go:88] provisioning docker machine ...
	I0801 17:27:25.745882   29409 ubuntu.go:169] provisioning hostname "old-k8s-version-20220801172716-13911"
	I0801 17:27:25.745990   29409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:27:25.817179   29409 main.go:134] libmachine: Using SSH client type: native
	I0801 17:27:25.817379   29409 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50175 <nil> <nil>}
	I0801 17:27:25.817395   29409 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220801172716-13911 && echo "old-k8s-version-20220801172716-13911" | sudo tee /etc/hostname
	I0801 17:27:25.938824   29409 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220801172716-13911
	
	I0801 17:27:25.938924   29409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:27:26.010484   29409 main.go:134] libmachine: Using SSH client type: native
	I0801 17:27:26.010636   29409 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50175 <nil> <nil>}
	I0801 17:27:26.010665   29409 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220801172716-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220801172716-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220801172716-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:27:26.122370   29409 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:27:26.122393   29409 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:27:26.122417   29409 ubuntu.go:177] setting up certificates
	I0801 17:27:26.122427   29409 provision.go:83] configureAuth start
	I0801 17:27:26.122521   29409 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:27:26.193894   29409 provision.go:138] copyHostCerts
	I0801 17:27:26.193983   29409 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:27:26.193993   29409 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:27:26.194093   29409 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:27:26.194284   29409 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:27:26.194294   29409 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:27:26.194359   29409 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:27:26.194493   29409 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:27:26.194502   29409 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:27:26.194569   29409 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:27:26.194755   29409 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220801172716-13911 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220801172716-13911]
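Here provision.go generates a Docker server certificate whose SANs cover the container's static IP, loopback, and hostnames, signed by the per-profile minikube CA. A self-signed approximation using Go's crypto/x509 (the real code signs with ca.pem/ca-key.pem rather than self-signing; the SAN list and the 26280h lifetime are taken from the log and the config dump above):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-20220801172716-13911"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the san=[...] list in the log line above.
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-20220801172716-13911"},
            IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed for brevity: the template doubles as its own parent certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }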
	I0801 17:27:26.280243   29409 provision.go:172] copyRemoteCerts
	I0801 17:27:26.280305   29409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:27:26.280356   29409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:27:26.351397   29409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50175 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:27:26.433446   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:27:26.451661   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0801 17:27:26.468724   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 17:27:26.486810   29409 provision.go:86] duration metric: configureAuth took 364.366498ms
	I0801 17:27:26.486823   29409 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:27:26.486959   29409 config.go:180] Loaded profile config "old-k8s-version-20220801172716-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0801 17:27:26.487017   29409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:27:26.558602   29409 main.go:134] libmachine: Using SSH client type: native
	I0801 17:27:26.558769   29409 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50175 <nil> <nil>}
	I0801 17:27:26.558790   29409 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:27:26.672406   29409 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:27:26.672418   29409 ubuntu.go:71] root file system type: overlay
	I0801 17:27:26.672575   29409 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:27:26.672655   29409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:27:26.745236   29409 main.go:134] libmachine: Using SSH client type: native
	I0801 17:27:26.745389   29409 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50175 <nil> <nil>}
	I0801 17:27:26.745442   29409 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:27:26.864739   29409 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:27:26.864817   29409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:27:26.937160   29409 main.go:134] libmachine: Using SSH client type: native
	I0801 17:27:26.937303   29409 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50175 <nil> <nil>}
	I0801 17:27:26.937316   29409 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:27:27.509244   29409 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-08-02 00:27:26.875596118 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
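The SSH one-liner above is an update-if-changed guard: the provisioner writes docker.service.new, diffs it against the live unit, and only when they differ moves the new file into place and restarts Docker, so an unchanged config never bounces the daemon. A minimal local Go sketch of that guard (file names are illustrative, not minikube's actual code):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
)

// updateIfChanged replaces current with candidate only when their
// contents differ, and reports whether a daemon restart would be needed.
func updateIfChanged(current, candidate string) (bool, error) {
	old, err := os.ReadFile(current)
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	next, err := os.ReadFile(candidate)
	if err != nil {
		return false, err
	}
	if bytes.Equal(old, next) {
		return false, os.Remove(candidate) // unchanged: discard the candidate
	}
	// Differs: move it into place, as the `mv` branch of the command above
	// does; a real provisioner then runs daemon-reload and restarts docker.
	return true, os.Rename(candidate, current)
}

func main() {
	changed, err := updateIfChanged("docker.service", "docker.service.new")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("restart needed:", changed)
}
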
	I0801 17:27:27.509275   29409 machine.go:91] provisioned docker machine in 1.763407147s
	I0801 17:27:27.509284   29409 client.go:171] LocalClient.Create took 9.414172324s
	I0801 17:27:27.509300   29409 start.go:174] duration metric: libmachine.API.Create for "old-k8s-version-20220801172716-13911" took 9.41424859s
	I0801 17:27:27.509307   29409 start.go:307] post-start starting for "old-k8s-version-20220801172716-13911" (driver="docker")
	I0801 17:27:27.509317   29409 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:27:27.509392   29409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:27:27.509444   29409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:27:27.581559   29409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50175 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:27:27.666159   29409 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:27:27.670192   29409 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:27:27.670209   29409 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:27:27.670216   29409 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:27:27.670221   29409 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:27:27.670231   29409 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:27:27.670338   29409 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:27:27.670492   29409 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:27:27.670656   29409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:27:27.678280   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:27:27.696246   29409 start.go:310] post-start completed in 186.922845ms
	I0801 17:27:27.696733   29409 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:27:27.770173   29409 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/config.json ...
	I0801 17:27:27.770576   29409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:27:27.770627   29409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:27:27.842767   29409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50175 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:27:27.922181   29409 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
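The two df probes above read disk usage for /var; the awk 'NR==2{print $5}' part pulls the fifth whitespace-separated field (Use%) from the second line of df output. A Go sketch of the same field extraction, run locally for illustration (minikube executes it over SSH inside the node container):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// usedPercent runs `df -h path` and returns field 5 of the second
// output line, mirroring the awk extraction in the log above.
func usedPercent(path string) (string, error) {
	out, err := exec.Command("df", "-h", path).Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	fields := strings.Fields(lines[1])
	if len(fields) < 5 {
		return "", fmt.Errorf("unexpected df line: %q", lines[1])
	}
	return fields[4], nil
}

func main() {
	pct, err := usedPercent("/var")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("used:", pct)
}
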
	I0801 17:27:27.926886   29409 start.go:135] duration metric: createHost completed in 9.875856711s
	I0801 17:27:27.926905   29409 start.go:82] releasing machines lock for "old-k8s-version-20220801172716-13911", held for 9.875969627s
	I0801 17:27:27.926987   29409 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:27:28.010549   29409 ssh_runner.go:195] Run: systemctl --version
	I0801 17:27:28.010551   29409 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:27:28.010608   29409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:27:28.010622   29409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:27:28.091239   29409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50175 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:27:28.094898   29409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50175 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:27:28.366956   29409 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:27:28.380185   29409 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:27:28.380248   29409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:27:28.389386   29409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:27:28.402198   29409 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:27:28.463877   29409 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:27:28.535480   29409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:27:28.604990   29409 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:27:28.834418   29409 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:27:28.871116   29409 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:27:28.958158   29409 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0801 17:27:28.958280   29409 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220801172716-13911 dig +short host.docker.internal
	I0801 17:27:29.084993   29409 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:27:29.085097   29409 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:27:29.089395   29409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
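The /etc/hosts rewrite above is idempotent: it strips any existing line ending in a tab plus host.minikube.internal, then appends the fresh mapping, so repeated starts never accumulate duplicate entries. A pure-function Go sketch of the same rewrite (minikube itself performs it remotely via the shell command shown):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry returns hosts-file content with exactly one
// "<ip>\t<name>" line for name, dropping any stale mapping first.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the old mapping for this name
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	in := "127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal"
	fmt.Print(ensureHostsEntry(in, "192.168.65.2", "host.minikube.internal"))
}
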
	I0801 17:27:29.099279   29409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:27:29.173199   29409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:27:29.173263   29409 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:27:29.203744   29409 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0801 17:27:29.203759   29409 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:27:29.203825   29409 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:27:29.235250   29409 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0801 17:27:29.235274   29409 cache_images.go:84] Images are preloaded, skipping loading
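Skipping extraction hinges on a containment check: every image required for v1.16.0 must already appear in the `docker images` listing above. A sketch of that set comparison (not minikube's actual docker.go, just the check it implies):

package main

import (
	"fmt"
	"strings"
)

// hasAllImages reports whether every required image name appears in
// the newline-separated `docker images --format {{.Repository}}:{{.Tag}}` output.
func hasAllImages(dockerImagesOutput string, required []string) bool {
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(dockerImagesOutput), "\n") {
		have[strings.TrimSpace(line)] = true
	}
	for _, img := range required {
		if !have[img] {
			return false // at least one image missing: extract the preload
		}
	}
	return true
}

func main() {
	out := "k8s.gcr.io/kube-apiserver:v1.16.0\nk8s.gcr.io/pause:3.1"
	fmt.Println(hasAllImages(out, []string{"k8s.gcr.io/pause:3.1"}))
}
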
	I0801 17:27:29.235918   29409 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:27:29.321239   29409 cni.go:95] Creating CNI manager for ""
	I0801 17:27:29.321252   29409 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:27:29.321264   29409 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:27:29.321278   29409 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220801172716-13911 NodeName:old-k8s-version-20220801172716-13911 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:27:29.321384   29409 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220801172716-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220801172716-13911
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
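The kubeadm config dumped above is rendered from the option struct logged just before it; minikube fills a much larger template internally. A cut-down illustration of producing such a fragment with Go's text/template, using only values visible in this log (the template text here is illustrative, not minikube's):

package main

import (
	"log"
	"os"
	"text/template"
)

// A trimmed-down template covering a few of the fields shown above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	data := map[string]string{
		"ClusterName":         "old-k8s-version-20220801172716-13911",
		"ControlPlaneAddress": "control-plane.minikube.internal",
		"APIServerPort":       "8443",
		"KubernetesVersion":   "v1.16.0",
		"PodSubnet":           "10.244.0.0/16",
		"ServiceCIDR":         "10.96.0.0/12",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}
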
	I0801 17:27:29.321456   29409 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220801172716-13911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:27:29.321526   29409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0801 17:27:29.338562   29409 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:27:29.338626   29409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:27:29.347183   29409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0801 17:27:29.362835   29409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:27:29.375372   29409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0801 17:27:29.389039   29409 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:27:29.392909   29409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:27:29.403897   29409 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911 for IP: 192.168.76.2
	I0801 17:27:29.403990   29409 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:27:29.404040   29409 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:27:29.404078   29409 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/client.key
	I0801 17:27:29.404092   29409 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/client.crt with IP's: []
	I0801 17:27:29.660348   29409 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/client.crt ...
	I0801 17:27:29.660362   29409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/client.crt: {Name:mkcd0d7a89e25e76af33427ab28101c68b245d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:27:29.660749   29409 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/client.key ...
	I0801 17:27:29.660760   29409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/client.key: {Name:mk846509a4b47df78929254062f74e4623c87231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:27:29.661006   29409 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key.31bdca25
	I0801 17:27:29.661024   29409 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0801 17:27:29.698089   29409 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.crt.31bdca25 ...
	I0801 17:27:29.698100   29409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.crt.31bdca25: {Name:mk690d16d2b5b37369b51dfcd196d4a4c6ef9217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:27:29.698376   29409 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key.31bdca25 ...
	I0801 17:27:29.698384   29409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key.31bdca25: {Name:mkc356c0aad5fd94b2857abcc304aefb71eb526e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:27:29.698581   29409 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.crt
	I0801 17:27:29.698741   29409 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key
	I0801 17:27:29.698917   29409 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.key
	I0801 17:27:29.698935   29409 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.crt with IP's: []
	I0801 17:27:29.824419   29409 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.crt ...
	I0801 17:27:29.824442   29409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.crt: {Name:mk75268efac535739c3374329bf9cfa043ea365d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:27:29.824868   29409 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.key ...
	I0801 17:27:29.824887   29409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.key: {Name:mkb8a1b43862fe08ddb9fffc0e6591f35327f0d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
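Each "WriteFile acquiring" line above takes a named lock with a 500ms retry delay and a one minute timeout before touching the file, so concurrent minikube processes cannot interleave writes to the same profile files. A sketch of that acquire-with-retry shape using a plain O_EXCL lockfile; minikube's real lock implementation differs, and only the Delay and Timeout values are taken from the log:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire creates path exclusively, retrying every delay until timeout.
// Success means the lock is held; the caller removes path to release it.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f.Close()
		}
		if !errors.Is(err, os.ErrExist) {
			return err // unexpected error, not contention
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	if err := acquire("/tmp/demo.lock", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer os.Remove("/tmp/demo.lock")
	fmt.Println("lock acquired")
}
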
	I0801 17:27:29.825433   29409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:27:29.825474   29409 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:27:29.825483   29409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:27:29.825511   29409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:27:29.825543   29409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:27:29.825573   29409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:27:29.825636   29409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:27:29.826051   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:27:29.844868   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 17:27:29.863258   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:27:29.880990   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0801 17:27:29.898887   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:27:29.916536   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:27:29.935588   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:27:29.954944   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:27:29.974512   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:27:29.994294   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:27:30.014244   29409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:27:30.035561   29409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:27:30.049862   29409 ssh_runner.go:195] Run: openssl version
	I0801 17:27:30.056165   29409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:27:30.064428   29409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:27:30.068118   29409 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:27:30.068162   29409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:27:30.073520   29409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:27:30.081928   29409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:27:30.090153   29409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:27:30.094178   29409 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:27:30.094228   29409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:27:30.099919   29409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
	I0801 17:27:30.108017   29409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:27:30.116357   29409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:27:30.120503   29409 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:27:30.120546   29409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:27:30.126162   29409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
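The openssl/ln pairs above install each CA under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trust anchors in /etc/ssl/certs. A Go sketch combining the two steps shown in the log (needs root and a real certificate to run):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the cert's subject hash via the same
// `openssl x509 -hash -noout -in` invocation used above, then creates
// the <hash>.0 symlink in certsDir, emulating `ln -fs`.
func linkCertByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace any existing link, as -f does
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("created", link)
}
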
	I0801 17:27:30.134391   29409 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:27:30.134487   29409 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:27:30.166487   29409 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:27:30.174293   29409 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:27:30.181983   29409 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:27:30.182036   29409 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:27:30.189499   29409 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:27:30.189521   29409 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:27:31.024940   29409 out.go:204]   - Generating certificates and keys ...
	I0801 17:27:33.688442   29409 out.go:204]   - Booting up control plane ...
	W0801 17:29:28.628289   29409 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220801172716-13911 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220801172716-13911 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
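The repeated [kubelet-check] failures above show kubeadm polling the kubelet's local health endpoint and getting connection refused for the entire wait window, meaning the kubelet never came up. A small Go re-creation of that poll loop against the same endpoint, for illustration:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls url until it answers 200 OK or timeout elapses,
// the same check behind the "curl -sSL http://localhost:10248/healthz" lines.
func waitForKubelet(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy after %s", timeout)
}

func main() {
	if err := waitForKubelet("http://localhost:10248/healthz", 40*time.Second); err != nil {
		fmt.Println(err) // corresponds to the repeated "connection refused" above
	}
}
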
	I0801 17:29:28.628335   29409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0801 17:29:29.051888   29409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:29:29.061714   29409 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:29:29.061778   29409 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:29:29.071331   29409 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:29:29.071356   29409 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:29:29.852809   29409 out.go:204]   - Generating certificates and keys ...
	I0801 17:29:31.093651   29409 out.go:204]   - Booting up control plane ...
	I0801 17:31:26.010888   29409 kubeadm.go:397] StartCluster complete in 3m55.873905981s
	I0801 17:31:26.010962   29409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:31:26.041109   29409 logs.go:274] 0 containers: []
	W0801 17:31:26.041122   29409 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:31:26.041180   29409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:31:26.069359   29409 logs.go:274] 0 containers: []
	W0801 17:31:26.069372   29409 logs.go:276] No container was found matching "etcd"
	I0801 17:31:26.069428   29409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:31:26.097973   29409 logs.go:274] 0 containers: []
	W0801 17:31:26.097985   29409 logs.go:276] No container was found matching "coredns"
	I0801 17:31:26.098048   29409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:31:26.126294   29409 logs.go:274] 0 containers: []
	W0801 17:31:26.126307   29409 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:31:26.126366   29409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:31:26.155467   29409 logs.go:274] 0 containers: []
	W0801 17:31:26.155480   29409 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:31:26.155537   29409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:31:26.184624   29409 logs.go:274] 0 containers: []
	W0801 17:31:26.184641   29409 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:31:26.184707   29409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:31:26.214097   29409 logs.go:274] 0 containers: []
	W0801 17:31:26.214108   29409 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:31:26.214169   29409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:31:26.242971   29409 logs.go:274] 0 containers: []
	W0801 17:31:26.242983   29409 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:31:26.242995   29409 logs.go:123] Gathering logs for kubelet ...
	I0801 17:31:26.243004   29409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:31:26.281860   29409 logs.go:123] Gathering logs for dmesg ...
	I0801 17:31:26.281875   29409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:31:26.294257   29409 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:31:26.294271   29409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:31:26.351249   29409 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:31:26.351263   29409 logs.go:123] Gathering logs for Docker ...
	I0801 17:31:26.351270   29409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:31:26.367391   29409 logs.go:123] Gathering logs for container status ...
	I0801 17:31:26.367405   29409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:31:28.420802   29409 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053362663s)
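The container-status probe just completed prefers crictl and falls back to `docker ps -a` when crictl is missing or fails, exactly as the `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a` command encodes. The same try-then-fallback in Go (sudo access and tool availability assumed, as in the log):

package main

import (
	"fmt"
	"os/exec"
)

// psAny lists all containers, trying crictl first and falling back to
// the docker CLI on any failure.
func psAny() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	// crictl absent or erroring: fall back to docker.
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := psAny()
	if err != nil {
		fmt.Println("both runtimes failed:", err)
		return
	}
	fmt.Print(out)
}
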
	W0801 17:31:28.420955   29409 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0801 17:31:28.420972   29409 out.go:239] * 
	W0801 17:31:28.421085   29409 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 17:31:28.421113   29409 out.go:239] * 
	W0801 17:31:28.421609   29409 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 17:31:28.485526   29409 out.go:177] 
	W0801 17:31:28.527438   29409 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 17:31:28.527578   29409 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0801 17:31:28.527690   29409 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0801 17:31:28.586394   29409 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220801172716-13911 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
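The repeated [kubelet-check] lines in the log above are kubeadm polling the kubelet's local health endpoint (http://localhost:10248/healthz) until the control plane comes up; every poll in this run fails with "connection refused", so kubeadm gives up after the 4m0s wait. As a rough illustration only (a hypothetical standalone helper, not part of the minikube test suite, assuming the default healthz port 10248 shown in the log), an equivalent probe in Go:

	// kubeletprobe.go (hypothetical): poll the kubelet healthz endpoint the
	// same way the [kubelet-check] lines above do, printing the dial error
	// ("connect: connection refused" in this run) on each failed attempt.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		for attempt := 1; attempt <= 5; attempt++ {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err != nil {
				fmt.Printf("attempt %d: kubelet not reachable: %v\n", attempt, err)
				time.Sleep(5 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Println("kubelet healthz:", resp.Status)
			return
		}
	}

If the endpoint never comes up, the Suggestion line above is the relevant lead: rerun the start with the kubelet cgroup driver pinned, e.g. out/minikube-darwin-amd64 start -p old-k8s-version-20220801172716-13911 --extra-config=kubelet.cgroup-driver=systemd (plus the original flags).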
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220801172716-13911
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220801172716-13911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6",
	        "Created": "2022-08-02T00:27:24.523444703Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:27:24.983745258Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hostname",
	        "HostsPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hosts",
	        "LogPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6-json.log",
	        "Name": "/old-k8s-version-20220801172716-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220801172716-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220801172716-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220801172716-13911",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220801172716-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220801172716-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ecaf857c193e9b64542789e55830af2f33d370d3507eb712b4c5e2e3a392eee",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50175"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50176"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50178"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50179"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ecaf857c193",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220801172716-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dfb69a533565",
	                        "old-k8s-version-20220801172716-13911"
	                    ],
	                    "NetworkID": "947fc21b2e0fc27b09dd4dd43b477927d08a61d441a541fee2a6fa712bca71b9",
	                    "EndpointID": "0db4f4af11e514d62e1769c067e1803817b13c3f9d4a871696f548a9f9ea058b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
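For post-mortems like this one, the full docker inspect dump can be narrowed to the fields of interest with a format template; for example, this standard docker CLI invocation (shown here only as a convenience, it is not part of the test harness) would print just the container state, "running" for the dump above:

	docker inspect -f '{{.State.Status}}' old-k8s-version-20220801172716-13911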
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 6 (448.437009ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0801 17:31:29.192531   30150 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220801172716-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220801172716-13911" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (252.22s)
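The status check above appears to exit 6 because the profile's endpoint was never written to the kubeconfig (see the "does not appear in .../kubeconfig" error in the stderr block), which in turn follows from the kubeadm init failure. If reproducing locally, minikube's own hint in the stdout block applies:

	out/minikube-darwin-amd64 update-context -p old-k8s-version-20220801172716-13911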

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (56.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.106033827s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.106112671s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0801 17:28:31.286573   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.103473448s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.122610259s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.107962456s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.109796218s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.113297122s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (56.23s)
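The HairPin check execs nc inside the netcat deployment and asks it to connect back to its own Service name ("nc -w 5 -i 5 -z netcat 8080": 5s timeout, 5s interval, zero-I/O port scan), so a pass requires hairpin traffic (pod to its own Service and back to the same pod) to work. A minimal Go sketch of an equivalent probe, which would have to run inside the pod where the "netcat" service name resolves (illustrative only, not the test's actual code path):

	// hairpinprobe.go (hypothetical): dial the pod's own service, mirroring
	// "nc -w 5 -z netcat 8080"; a successful dial demonstrates hairpin
	// connectivity, a refused/timed-out dial matches the exit status 1 above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
		if err != nil {
			fmt.Println("hairpin check failed:", err)
			return
		}
		conn.Close()
		fmt.Println("hairpin check ok")
	}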

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220801172716-13911 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220801172716-13911 create -f testdata/busybox.yaml: exit status 1 (29.839128ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220801172716-13911" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-20220801172716-13911 create -f testdata/busybox.yaml failed: exit status 1
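This failure is downstream of FirstStart: the kubectl context "old-k8s-version-20220801172716-13911" was never created because kubeadm init failed, so every kubectl --context call in this serial group fails the same way. When triaging locally, the contexts that do exist can be listed with the standard kubectl command:

	kubectl config get-contexts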
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220801172716-13911
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220801172716-13911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6",
	        "Created": "2022-08-02T00:27:24.523444703Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:27:24.983745258Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hostname",
	        "HostsPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hosts",
	        "LogPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6-json.log",
	        "Name": "/old-k8s-version-20220801172716-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220801172716-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220801172716-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220801172716-13911",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220801172716-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220801172716-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ecaf857c193e9b64542789e55830af2f33d370d3507eb712b4c5e2e3a392eee",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50175"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50176"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50178"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50179"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ecaf857c193",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220801172716-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dfb69a533565",
	                        "old-k8s-version-20220801172716-13911"
	                    ],
	                    "NetworkID": "947fc21b2e0fc27b09dd4dd43b477927d08a61d441a541fee2a6fa712bca71b9",
	                    "EndpointID": "0db4f4af11e514d62e1769c067e1803817b13c3f9d4a871696f548a9f9ea058b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
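Note: the post-mortem helper dumps the container's full docker inspect JSON verbatim. When only a single field matters, a Go-template query against the same container is easier to read; a minimal sketch using the container name from this run:

	# Print just the host port published for the node's SSH port (22/tcp);
	# per the Ports section of the dump above, this should print 50175.
	docker inspect \
	  -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' \
	  old-k8s-version-20220801172716-13911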
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 6 (440.200113ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0801 17:31:29.736573   30165 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220801172716-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220801172716-13911" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220801172716-13911
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220801172716-13911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6",
	        "Created": "2022-08-02T00:27:24.523444703Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:27:24.983745258Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hostname",
	        "HostsPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hosts",
	        "LogPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6-json.log",
	        "Name": "/old-k8s-version-20220801172716-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220801172716-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220801172716-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220801172716-13911",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220801172716-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220801172716-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ecaf857c193e9b64542789e55830af2f33d370d3507eb712b4c5e2e3a392eee",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50175"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50176"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50178"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50179"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ecaf857c193",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220801172716-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dfb69a533565",
	                        "old-k8s-version-20220801172716-13911"
	                    ],
	                    "NetworkID": "947fc21b2e0fc27b09dd4dd43b477927d08a61d441a541fee2a6fa712bca71b9",
	                    "EndpointID": "0db4f4af11e514d62e1769c067e1803817b13c3f9d4a871696f548a9f9ea058b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 6 (438.914029ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0801 17:31:30.248650   30177 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220801172716-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220801172716-13911" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220801172716-13911 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0801 17:31:35.059821   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:31:35.724539   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:31:38.916362   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:31:40.516981   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:01.275347   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:01.280434   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:01.291023   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:01.311172   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:01.351389   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:01.431661   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:01.592544   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:01.913665   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:02.137344   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:02.143770   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:02.156041   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:02.178271   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:02.219130   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:02.301368   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:02.463485   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:02.554102   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:02.784253   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:03.424742   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:03.834551   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:04.704938   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:06.395414   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:07.267156   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:08.203100   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:11.515680   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:12.420403   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:16.685803   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:32:20.551599   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 17:32:21.757916   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:22.661350   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:27.183993   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:32:37.503852   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 17:32:39.139643   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 17:32:42.239399   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:43.143833   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:32:50.321796   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
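Note: the E0801 ... cert_rotation.go:168 lines above are background noise from client-go's certificate-rotation watcher, which still references client.crt files of earlier test profiles (skaffold, false, calico, auto, enable-default-cni, bridge, cilium, kindnet, addons, functional) whose cert files were removed when those tests cleaned up. When reading a run like this, one way to filter them out (the log filename here is hypothetical):

	# Drop the cert-rotation watcher errors to leave only the lines
	# produced by the test step itself.
	grep -v 'cert_rotation.go:168' run.log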
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220801172716-13911 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.195835583s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220801172716-13911 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
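Note: MK_ADDON_ENABLE fails here because every kubectl apply inside the node is refused at 127.0.0.1:8443, i.e. the apiserver is not listening, not because the addon manifests are malformed. A minimal probe under that reading (this assumes curl is available in the kicbase image):

	# Ask the apiserver's health endpoint from inside the node container;
	# "connection refused" would confirm the apiserver is down, matching
	# the kubectl errors above.
	docker exec old-k8s-version-20220801172716-13911 \
	  sh -c 'curl -ksS https://localhost:8443/healthz || echo apiserver unreachable'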
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220801172716-13911 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220801172716-13911 describe deploy/metrics-server -n kube-system: exit status 1 (30.70123ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220801172716-13911" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220801172716-13911 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
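Note: the assertion at start_stop_delete_test.go:221 checks the deployment's image against " fake.domain/k8s.gcr.io/echoserver:1.4". Expressed as a one-off query (only meaningful once the context exists, which it does not on this run):

	# Print the image of the metrics-server deployment; the test expects
	# it to contain fake.domain/k8s.gcr.io/echoserver:1.4.
	kubectl --context old-k8s-version-20220801172716-13911 \
	  -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'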
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220801172716-13911
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220801172716-13911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6",
	        "Created": "2022-08-02T00:27:24.523444703Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:27:24.983745258Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hostname",
	        "HostsPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hosts",
	        "LogPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6-json.log",
	        "Name": "/old-k8s-version-20220801172716-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220801172716-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220801172716-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220801172716-13911",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220801172716-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220801172716-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ecaf857c193e9b64542789e55830af2f33d370d3507eb712b4c5e2e3a392eee",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50175"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50176"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50178"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50179"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ecaf857c193",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220801172716-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dfb69a533565",
	                        "old-k8s-version-20220801172716-13911"
	                    ],
	                    "NetworkID": "947fc21b2e0fc27b09dd4dd43b477927d08a61d441a541fee2a6fa712bca71b9",
	                    "EndpointID": "0db4f4af11e514d62e1769c067e1803817b13c3f9d4a871696f548a9f9ea058b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 6 (443.889782ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0801 17:32:59.995102   30277 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220801172716-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220801172716-13911" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.75s)
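The exit status 6 above comes from the kubeconfig lookup at status.go:413: the profile name has no cluster entry in the kubeconfig, so the status check cannot extract the endpoint IP. A rough sketch of the shape of that lookup, assuming k8s.io/client-go (this is not minikube's exact code):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The report sets KUBECONFIG explicitly; a real tool would fall back
	// to ~/.kube/config when the variable is empty.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	name := "old-k8s-version-20220801172716-13911"
	cluster, ok := cfg.Clusters[name]
	if !ok {
		// This is the situation reported at status.go:413 above.
		fmt.Printf("%q does not appear in the kubeconfig\n", name)
		return
	}
	fmt.Println("endpoint:", cluster.Server)
}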

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (492.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220801172716-13911 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0801 17:33:04.461878   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:04.467773   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:04.478466   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:04.498652   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:04.539151   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:04.619503   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:04.780198   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:05.101550   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:05.742094   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:07.022222   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:09.582446   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:14.702759   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:18.016761   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:33:23.259888   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:24.105074   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:24.945175   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:33:38.606883   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:33:45.427665   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:34:26.388213   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:34:43.358034   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:34:45.206799   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:34:46.052420   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:35:11.072722   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:35:12.000520   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:35:17.035969   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
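A note on the cert_rotation.go:168 noise above: the paths all point at client certificates of profiles from earlier tests (kubenet, kindnet, bridge, cilium, ...) whose directories have since been deleted, so each periodic reload of those certs fails with "no such file or directory". That reading is an inference from the log, not something the report states; a stripped-down, stdlib-only reload loop with the same failure mode:

package main

import (
	"crypto/tls"
	"log"
	"time"
)

func main() {
	// Placeholder paths; in the log these live under .minikube/profiles/<name>/.
	certFile := "/path/to/profiles/some-profile/client.crt"
	keyFile := "/path/to/profiles/some-profile/client.key"
	for range time.Tick(30 * time.Second) {
		if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
			// Once the profile directory is deleted, this matches the
			// "key failed with : open ...: no such file or directory" lines.
			log.Printf("key failed with : %v", err)
		}
	}
}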

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220801172716-13911 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m7.283564095s)

-- stdout --
	* [old-k8s-version-20220801172716-13911] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	* Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220801172716-13911 in cluster old-k8s-version-20220801172716-13911
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220801172716-13911" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0801 17:33:02.092956   30307 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:33:02.093151   30307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:33:02.093156   30307 out.go:309] Setting ErrFile to fd 2...
	I0801 17:33:02.093160   30307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:33:02.093248   30307 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:33:02.093715   30307 out.go:303] Setting JSON to false
	I0801 17:33:02.108781   30307 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9153,"bootTime":1659391229,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:33:02.108901   30307 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:33:02.131071   30307 out.go:177] * [old-k8s-version-20220801172716-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:33:02.207125   30307 notify.go:193] Checking for updates...
	I0801 17:33:02.227733   30307 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:33:02.269750   30307 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:33:02.311846   30307 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:33:02.354020   30307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:33:02.375064   30307 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:33:02.396274   30307 config.go:180] Loaded profile config "old-k8s-version-20220801172716-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0801 17:33:02.417428   30307 out.go:177] * Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	I0801 17:33:02.438938   30307 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:33:02.509086   30307 docker.go:137] docker version: linux-20.10.17
	I0801 17:33:02.509230   30307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:33:02.642340   30307 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:33:02.585183315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:33:02.684700   30307 out.go:177] * Using the docker driver based on existing profile
	I0801 17:33:02.705708   30307 start.go:284] selected driver: docker
	I0801 17:33:02.705726   30307 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 N
amespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: M
ultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:33:02.705810   30307 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:33:02.707990   30307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:33:02.841272   30307 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:33:02.783411359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:33:02.841425   30307 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 17:33:02.841442   30307 cni.go:95] Creating CNI manager for ""
	I0801 17:33:02.841454   30307 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:33:02.841463   30307 start_flags.go:310] config:
	{Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSD
omain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:33:02.863560   30307 out.go:177] * Starting control plane node old-k8s-version-20220801172716-13911 in cluster old-k8s-version-20220801172716-13911
	I0801 17:33:02.901007   30307 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:33:02.922018   30307 out.go:177] * Pulling base image ...
	I0801 17:33:02.994914   30307 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:33:02.994956   30307 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:33:02.995023   30307 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0801 17:33:02.995060   30307 cache.go:57] Caching tarball of preloaded images
	I0801 17:33:02.995280   30307 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:33:02.995300   30307 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
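	The two preload lines above reduce to "stat the tarball, download only if missing". A hypothetical helper showing that check (not minikube's code):

package main

import (
	"fmt"
	"os"
)

// ensurePreload downloads the tarball only when it is not already cached.
func ensurePreload(path string, download func(string) error) error {
	if _, err := os.Stat(path); err == nil {
		fmt.Println("found local preload, skipping download:", path)
		return nil
	} else if !os.IsNotExist(err) {
		return err // stat failed for some other reason
	}
	return download(path)
}

func main() {
	tarball := os.ExpandEnv("$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4")
	_ = ensurePreload(tarball, func(p string) error {
		fmt.Println("would download", p)
		return nil
	})
}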
	I0801 17:33:02.996429   30307 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/config.json ...
	I0801 17:33:03.060663   30307 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:33:03.060678   30307 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:33:03.060689   30307 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:33:03.060733   30307 start.go:371] acquiring machines lock for old-k8s-version-20220801172716-13911: {Name:mkbe9b0aeba6b12111b317502f6798dbe4170df1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:33:03.060814   30307 start.go:375] acquired machines lock for "old-k8s-version-20220801172716-13911" in 58.105µs
	I0801 17:33:03.060833   30307 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:33:03.060843   30307 fix.go:55] fixHost starting: 
	I0801 17:33:03.061068   30307 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220801172716-13911 --format={{.State.Status}}
	I0801 17:33:03.128234   30307 fix.go:103] recreateIfNeeded on old-k8s-version-20220801172716-13911: state=Stopped err=<nil>
	W0801 17:33:03.128265   30307 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:33:03.171939   30307 out.go:177] * Restarting existing docker container for "old-k8s-version-20220801172716-13911" ...
	I0801 17:33:03.192980   30307 cli_runner.go:164] Run: docker start old-k8s-version-20220801172716-13911
	I0801 17:33:03.538000   30307 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220801172716-13911 --format={{.State.Status}}
	I0801 17:33:03.611055   30307 kic.go:415] container "old-k8s-version-20220801172716-13911" state is running.
	I0801 17:33:03.611725   30307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:33:03.686263   30307 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/config.json ...
	I0801 17:33:03.686646   30307 machine.go:88] provisioning docker machine ...
	I0801 17:33:03.686671   30307 ubuntu.go:169] provisioning hostname "old-k8s-version-20220801172716-13911"
	I0801 17:33:03.686737   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:03.759719   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:03.759935   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:03.759949   30307 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220801172716-13911 && echo "old-k8s-version-20220801172716-13911" | sudo tee /etc/hostname
	I0801 17:33:03.881107   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220801172716-13911
	
	I0801 17:33:03.881202   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:03.953049   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:03.953193   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:03.953209   30307 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220801172716-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220801172716-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220801172716-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:33:04.068209   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:33:04.068228   30307 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:33:04.068250   30307 ubuntu.go:177] setting up certificates
	I0801 17:33:04.068257   30307 provision.go:83] configureAuth start
	I0801 17:33:04.068317   30307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:33:04.140299   30307 provision.go:138] copyHostCerts
	I0801 17:33:04.140379   30307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:33:04.140388   30307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:33:04.140472   30307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:33:04.140693   30307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:33:04.140702   30307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:33:04.140790   30307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:33:04.140960   30307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:33:04.140968   30307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:33:04.141026   30307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:33:04.141173   30307 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220801172716-13911 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220801172716-13911]
	I0801 17:33:04.220622   30307 provision.go:172] copyRemoteCerts
	I0801 17:33:04.220690   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:33:04.220732   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.292178   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:04.375104   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:33:04.392099   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0801 17:33:04.410165   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 17:33:04.426562   30307 provision.go:86] duration metric: configureAuth took 358.288794ms
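	configureAuth above generates a docker server certificate whose SANs cover the container IP, loopback, and the machine names (the san=[...] list a few lines up). A minimal stdlib equivalent, self-signed here for brevity where the real certificate is signed by the minikube CA:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-20220801172716-13911"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The san=[...] list from the provision.go line above:
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-20220801172716-13911"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}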
	I0801 17:33:04.426574   30307 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:33:04.426746   30307 config.go:180] Loaded profile config "old-k8s-version-20220801172716-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0801 17:33:04.426801   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.497954   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:04.498129   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:04.498141   30307 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:33:04.611392   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:33:04.611410   30307 ubuntu.go:71] root file system type: overlay
	I0801 17:33:04.611545   30307 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:33:04.611619   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.683157   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:04.683304   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:04.683371   30307 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:33:04.808590   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:33:04.808679   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.879830   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:04.879994   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:04.880012   30307 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:33:04.997035   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:33:04.997049   30307 machine.go:91] provisioned docker machine in 1.310380032s
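	The "diff -u ... || { mv ... && daemon-reload && restart }" command above is an idempotency guard: docker is only restarted when the rendered unit actually differs from what is on disk, which is why this restart finished in about 1.3s. The same pattern as a hypothetical Go helper:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// updateUnit installs the rendered unit and restarts docker only on change.
func updateUnit(path string, rendered []byte) error {
	current, _ := os.ReadFile(path) // a missing file reads as empty
	if bytes.Equal(current, rendered) {
		return nil // nothing changed; skip daemon-reload and restart
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n"))
}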
	I0801 17:33:04.997056   30307 start.go:307] post-start starting for "old-k8s-version-20220801172716-13911" (driver="docker")
	I0801 17:33:04.997074   30307 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:33:04.997144   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:33:04.997190   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.069168   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.153399   30307 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:33:05.157021   30307 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:33:05.157038   30307 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:33:05.157045   30307 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:33:05.157050   30307 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:33:05.157058   30307 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:33:05.157159   30307 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:33:05.157296   30307 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:33:05.157452   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:33:05.164984   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:33:05.182269   30307 start.go:310] post-start completed in 185.186568ms
	I0801 17:33:05.182349   30307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:33:05.182412   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.253249   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.336526   30307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:33:05.340949   30307 fix.go:57] fixHost completed within 2.280081452s
	I0801 17:33:05.340961   30307 start.go:82] releasing machines lock for "old-k8s-version-20220801172716-13911", held for 2.280115227s
	I0801 17:33:05.341031   30307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:33:05.411603   30307 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:33:05.411607   30307 ssh_runner.go:195] Run: systemctl --version
	I0801 17:33:05.411671   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.411689   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.488484   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.490663   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.760297   30307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:33:05.770249   30307 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:33:05.770315   30307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:33:05.781723   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:33:05.794766   30307 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:33:05.869802   30307 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:33:05.934941   30307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:33:06.019332   30307 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:33:06.228189   30307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:33:06.267803   30307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:33:06.346695   30307 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0801 17:33:06.346845   30307 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220801172716-13911 dig +short host.docker.internal
	I0801 17:33:06.475760   30307 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:33:06.475854   30307 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:33:06.480076   30307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:33:06.489496   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:06.561364   30307 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:33:06.561454   30307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:33:06.592913   30307 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0801 17:33:06.592929   30307 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:33:06.593009   30307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:33:06.623551   30307 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0801 17:33:06.623571   30307 cache_images.go:84] Images are preloaded, skipping loading
	I0801 17:33:06.623646   30307 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:33:06.699039   30307 cni.go:95] Creating CNI manager for ""
	I0801 17:33:06.699060   30307 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:33:06.699074   30307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:33:06.699090   30307 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220801172716-13911 NodeName:old-k8s-version-20220801172716-13911 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd Clien
tCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:33:06.699238   30307 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220801172716-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220801172716-13911
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
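	The kubeadm config above is rendered from the profile values logged earlier (node IP 192.168.76.2, pod CIDR 10.244.0.0/16, bind port 8443, cluster name). A toy text/template rendering of the first stanza, just to show the shape of that substitution (not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const stanza = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "{{.ClusterName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(stanza))
	_ = t.Execute(os.Stdout, struct {
		NodeIP      string
		Port        int
		ClusterName string
	}{"192.168.76.2", 8443, "old-k8s-version-20220801172716-13911"})
}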
	I0801 17:33:06.699312   30307 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220801172716-13911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:33:06.699380   30307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0801 17:33:06.706617   30307 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:33:06.706669   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:33:06.713640   30307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0801 17:33:06.727903   30307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:33:06.740699   30307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
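	The "scp memory -->" lines above copy in-memory bytes (the rendered kubelet units and kubeadm.yaml) into the container over SSH rather than from files on disk. The rough shape of such a copy, assuming golang.org/x/crypto/ssh; the key path is a placeholder and the sudo-tee trick mirrors how the provisioning steps write root-owned files:

package main

import (
	"bytes"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyToRemote streams contents over an SSH session into dest, via sudo tee,
// so no temporary file is needed on either side.
func copyToRemote(client *ssh.Client, contents []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(contents)
	return sess.Run("sudo tee " + dest + " >/dev/null") // assumes passwordless sudo
}

func main() {
	keyBytes, err := os.ReadFile("/path/to/.minikube/machines/some-profile/id_rsa") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	// Address and user match the sshutil.go lines above (host port 50784).
	client, err := ssh.Dial("tcp", "127.0.0.1:50784", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := copyToRemote(client, []byte("[Unit]\n"), "/lib/systemd/system/kubelet.service"); err != nil {
		log.Fatal(err)
	}
}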
	I0801 17:33:06.754028   30307 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:33:06.757691   30307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:33:06.767564   30307 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911 for IP: 192.168.76.2
	I0801 17:33:06.767666   30307 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:33:06.767715   30307 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:33:06.767802   30307 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/client.key
	I0801 17:33:06.767861   30307 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key.31bdca25
	I0801 17:33:06.767909   30307 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.key
	I0801 17:33:06.768129   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:33:06.768165   30307 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:33:06.768179   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:33:06.768215   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:33:06.768244   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:33:06.768273   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:33:06.768343   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:33:06.770066   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:33:06.786809   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 17:33:06.803930   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:33:06.820293   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0801 17:33:06.836640   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:33:06.853270   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:33:06.869959   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:33:06.886388   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:33:06.903049   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:33:06.920046   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:33:06.936329   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:33:06.953108   30307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:33:06.965417   30307 ssh_runner.go:195] Run: openssl version
	I0801 17:33:06.970864   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:33:06.979779   30307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:33:06.983543   30307 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:33:06.983586   30307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:33:06.988888   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 17:33:06.995997   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:33:07.003729   30307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:33:07.007447   30307 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:33:07.007493   30307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:33:07.012803   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:33:07.020845   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:33:07.028574   30307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:33:07.032339   30307 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:33:07.032378   30307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:33:07.037622   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
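The openssl/ln sequence above implements OpenSSL's subject-hash lookup convention: trusted CAs in /etc/ssl/certs are resolved via symlinks named <subject-hash>.0, where the hash comes from "openssl x509 -hash -noout" (b5213941.0 above is the hash of minikubeCA.pem in this run). A generalized sketch for a single PEM:

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	# subject-name hash, e.g. b5213941 for the minikube CA in this run
	HASH=$(openssl x509 -hash -noout -in "$PEM")
	# OpenSSL looks up trusted CAs under /etc/ssl/certs by <hash>.0
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"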
	I0801 17:33:07.044888   30307 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:33:07.044982   30307 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:33:07.073047   30307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:33:07.080535   30307 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:33:07.080556   30307 kubeadm.go:626] restartCluster start
	I0801 17:33:07.080608   30307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:33:07.087807   30307 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.087873   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:07.161019   30307 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220801172716-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:33:07.161188   30307 kubeconfig.go:127] "old-k8s-version-20220801172716-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:33:07.161555   30307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:33:07.162658   30307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:33:07.170122   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.170170   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.178204   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.378464   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.378560   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.388766   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.579693   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.579819   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.590131   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.780063   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.780238   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.791267   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.978733   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.978885   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.988977   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.178638   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.178717   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.187944   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.378810   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.378930   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.389502   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.578776   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.578955   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.589682   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.778805   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.778941   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.790788   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.980073   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.980189   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.990770   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.178462   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.178599   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.188930   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.378914   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.379012   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.389506   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.580573   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.580704   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.591607   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.780347   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.780485   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.790994   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.978646   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.978775   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.989169   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:10.178855   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:10.178968   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:10.187897   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:10.187907   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:10.187955   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:10.195605   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:10.195617   30307 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0801 17:33:10.195625   30307 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:33:10.195675   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:33:10.224715   30307 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:33:10.234985   30307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:33:10.242805   30307 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Aug  2 00:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5775 Aug  2 00:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Aug  2 00:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Aug  2 00:29 /etc/kubernetes/scheduler.conf
	
	I0801 17:33:10.242857   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 17:33:10.250643   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 17:33:10.258189   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 17:33:10.266321   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 17:33:10.273876   30307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:33:10.281390   30307 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:33:10.281402   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:10.329953   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:11.032947   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:11.233358   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:11.290594   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
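Because the existing configuration files were found, restartCluster re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml instead of performing a full "kubeadm init". A sketch of the same sequence run by hand on the node, assuming the paths from this log:

	for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
	  # $phase is deliberately unquoted so "certs all" splits into two arguments
	  sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done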
	I0801 17:33:11.342083   30307 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:33:11.342142   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:11.851910   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:12.351846   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:12.851217   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:13.353310   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:13.851936   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:14.353088   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:14.853235   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:15.353275   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:15.852184   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:16.353240   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:16.853252   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:17.353304   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:17.853335   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:18.351214   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:18.851526   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:19.351430   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:19.853261   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:20.352524   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:20.851275   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:21.352561   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:21.851472   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:22.351688   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:22.851332   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:23.351357   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:23.851974   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:24.353354   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:24.851825   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:25.353110   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:25.851764   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:26.351912   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:26.851768   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:27.351519   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:27.851289   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:28.351671   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:28.851467   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:29.351418   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:29.851312   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:30.351309   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:30.851712   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:31.353119   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:31.852333   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:32.351358   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:32.851965   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:33.351587   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:33.852401   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:34.351610   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:34.851477   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:35.351739   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:35.852236   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:36.351836   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:36.852166   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:37.351461   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:37.852701   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:38.351889   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:38.853136   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:39.353555   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:39.851668   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:40.351742   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:40.852690   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:41.351542   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:41.851651   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:42.351647   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:42.852217   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:43.352460   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:43.851462   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:44.351520   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:44.851542   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:45.352287   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:45.851529   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:46.351462   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:46.853011   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:47.353014   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:47.852957   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:48.351794   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:48.851608   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:49.353132   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:49.852861   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:50.351559   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:50.851826   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:51.351605   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:51.852394   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:52.351865   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:52.852613   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:53.352321   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:53.851626   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:54.351598   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:54.851666   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:55.351623   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:55.851667   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:56.351631   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:56.851992   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:57.351708   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:57.851772   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:58.351628   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:58.851633   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:59.352270   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:59.851588   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:00.351911   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:00.852107   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:01.352190   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:01.851781   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:02.352022   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:02.853040   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:03.352607   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:03.852400   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:04.351810   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:04.851747   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:05.351908   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:05.851982   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:06.353234   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:06.851753   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:07.351805   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:07.851765   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:08.353881   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:08.852724   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:09.351746   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:09.853807   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:10.353834   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:10.853159   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
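The block above is a fixed-interval poll: roughly every 500ms minikube re-runs pgrep until a kube-apiserver process appears or the wait deadline expires; here it never appears. A minimal bash equivalent of that wait loop, with an assumed 60s deadline:

	deadline=$(( $(date +%s) + 60 ))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  if [ "$(date +%s)" -ge "$deadline" ]; then
	    echo "timed out waiting for kube-apiserver" >&2
	    break
	  fi
	  sleep 0.5
	done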
	I0801 17:34:11.352358   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:11.383418   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.383432   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:11.383494   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:11.413072   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.413084   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:11.413142   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:11.442218   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.442230   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:11.442288   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:11.470969   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.470982   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:11.471044   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:11.500295   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.500308   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:11.500367   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:11.533285   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.533298   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:11.533358   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:11.563355   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.563367   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:11.563427   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:11.592445   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.592456   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:11.592479   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:11.592488   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:11.632510   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:11.632522   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:11.644313   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:11.644327   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:11.695794   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:11.695809   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:11.695815   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:11.709396   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:11.709407   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:13.763461   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054019747s)
	I0801 17:34:16.264200   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:16.353932   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:16.385118   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.385130   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:16.385190   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:16.414517   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.414529   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:16.414588   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:16.443356   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.443369   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:16.443435   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:16.477272   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.477285   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:16.477348   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:16.510936   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.510949   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:16.511011   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:16.547639   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.547652   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:16.547713   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:16.578107   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.578119   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:16.578177   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:16.607309   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.607323   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:16.607331   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:16.607339   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:16.645996   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:16.646009   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:16.657128   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:16.657141   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:16.709161   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:16.709176   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:16.709182   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:16.722936   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:16.722954   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:18.775009   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052014549s)
	I0801 17:34:21.277564   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:21.354038   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:21.385924   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.385936   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:21.385997   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:21.414350   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.414362   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:21.414418   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:21.444094   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.444107   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:21.444162   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:21.472715   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.472727   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:21.472784   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:21.501199   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.501211   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:21.501288   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:21.534002   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.534016   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:21.534092   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:21.564027   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.564039   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:21.564098   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:21.593121   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.593134   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:21.593143   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:21.593150   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:21.633306   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:21.633320   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:21.645837   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:21.645850   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:21.700543   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:21.700560   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:21.700567   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:21.714946   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:21.714960   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:23.771704   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056708133s)
	I0801 17:34:26.272261   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:26.353456   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:26.386051   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.386063   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:26.386119   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:26.415224   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.415236   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:26.415298   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:26.445222   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.445235   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:26.445292   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:26.475024   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.475037   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:26.475097   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:26.505006   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.505019   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:26.505077   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:26.542252   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.542265   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:26.542323   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:26.572302   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.572315   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:26.572374   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:26.601432   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.601445   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:26.601452   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:26.601459   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:26.615447   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:26.615459   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:28.668228   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052734957s)
	I0801 17:34:28.668338   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:28.668347   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:28.707285   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:28.707298   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:28.718726   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:28.718739   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:28.769688   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:31.270139   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:31.352538   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:31.382379   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.382397   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:31.382466   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:31.414167   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.414180   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:31.414250   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:31.447114   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.447129   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:31.447197   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:31.478169   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.478183   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:31.478244   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:31.508755   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.508767   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:31.508826   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:31.541935   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.541949   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:31.542012   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:31.573200   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.573213   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:31.573271   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:31.601641   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.601654   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:31.601661   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:31.601670   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:31.615421   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:31.615434   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:33.667553   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052084288s)
	I0801 17:34:33.667661   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:33.667671   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:33.708058   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:33.708075   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:33.721159   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:33.721175   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:33.773936   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:36.278098   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:36.358158   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:36.389134   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.389146   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:36.389206   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:36.418282   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.418294   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:36.418350   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:36.448321   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.448333   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:36.448391   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:36.477122   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.477138   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:36.477204   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:36.506036   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.506048   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:36.506118   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:36.550984   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.550998   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:36.551060   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:36.579712   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.579725   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:36.579788   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:36.608681   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.608692   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:36.608699   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:36.608706   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:36.648271   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:36.648288   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:36.661072   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:36.661086   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:36.717917   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:36.717928   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:36.717936   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:36.732109   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:36.732124   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:38.791203   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053533989s)
	I0801 17:34:41.297687   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:41.369319   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:41.400112   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.400125   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:41.400185   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:41.429000   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.429013   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:41.429077   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:41.457782   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.457794   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:41.457850   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:41.489550   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.489562   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:41.489622   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:41.518587   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.518600   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:41.518658   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:41.549089   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.549101   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:41.549167   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:41.578870   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.578885   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:41.578945   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:41.608653   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.608664   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:41.608671   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:41.608677   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:41.620204   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:41.620216   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:41.673763   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:41.673777   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:41.673784   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:41.688084   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:41.688096   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:43.745846   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053708064s)
	I0801 17:34:43.745957   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:43.745964   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:46.290648   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:46.378519   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:46.409191   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.409203   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:46.409260   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:46.438190   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.438201   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:46.438263   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:46.470731   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.470743   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:46.470802   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:46.502588   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.502599   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:46.502655   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:46.531976   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.531988   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:46.532047   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:46.566132   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.566145   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:46.566203   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:46.600014   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.600027   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:46.600083   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:46.629125   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.629137   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:46.629144   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:46.629152   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:46.670158   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:46.670172   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:46.681911   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:46.681922   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:46.735993   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:46.736003   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:46.736010   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:46.750833   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:46.750849   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:48.809538   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055753562s)
	I0801 17:34:51.313270   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:51.384934   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:51.414232   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.414250   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:51.414304   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:51.441881   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.441894   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:51.441954   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:51.470802   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.470813   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:51.470866   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:51.499238   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.499252   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:51.499316   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:51.527042   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.527055   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:51.527112   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:51.556456   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.556473   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:51.556541   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:51.585716   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.585728   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:51.585797   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:51.615551   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.615565   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:51.615572   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:51.615580   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:53.671946   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054212993s)
	I0801 17:34:53.672054   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:53.672061   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:53.714018   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:53.714031   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:53.725408   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:53.725422   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:53.778549   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:53.778560   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:53.778567   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:56.295298   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:56.390271   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:56.420485   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.420497   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:56.420554   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:56.449383   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.449397   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:56.449453   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:56.478432   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.478444   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:56.478500   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:56.506950   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.506962   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:56.507014   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:56.536393   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.536404   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:56.536463   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:56.565436   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.565449   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:56.565506   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:56.593950   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.593963   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:56.594019   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:56.621932   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.621945   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:56.621953   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:56.621960   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:56.663174   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:56.663190   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:56.675466   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:56.675478   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:56.736252   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:56.736265   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:56.736272   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:56.751881   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:56.751896   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:58.810799   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057341572s)
	I0801 17:35:01.313979   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:01.394957   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:01.429935   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.429948   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:01.430007   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:01.458854   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.458869   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:01.458940   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:01.489769   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.489781   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:01.489839   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:01.522081   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.522092   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:01.522152   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:01.552276   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.552288   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:01.552347   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:01.581231   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.581242   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:01.581303   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:01.610456   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.610468   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:01.610527   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:01.640825   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.640838   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:01.640845   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:01.640851   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:01.681164   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:01.681182   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:01.693005   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:01.693020   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:01.745760   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:01.745779   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:01.745785   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:01.760279   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:01.760291   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:03.814149   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052717763s)
	I0801 17:35:06.317273   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:06.397453   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:06.431739   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.431750   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:06.431808   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:06.460085   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.460096   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:06.460155   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:06.490788   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.490801   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:06.490865   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:06.521225   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.521238   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:06.521296   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:06.551676   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.551690   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:06.551748   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:06.581891   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.581903   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:06.581967   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:06.610415   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.610428   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:06.610487   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:06.638868   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.638881   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:06.638888   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:06.638896   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:06.677340   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:06.677355   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:06.689281   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:06.689296   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:06.741694   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:06.741718   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:06.741724   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:06.757440   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:06.757454   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:08.810862   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052570896s)
	I0801 17:35:11.312050   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:11.397246   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:11.438278   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.438296   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:11.438374   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:11.469285   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.469299   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:11.469369   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:11.506443   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.506454   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:11.506511   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:11.550600   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.550618   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:11.550696   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:11.587813   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.587828   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:11.587900   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:11.616041   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.616053   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:11.616109   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:11.656883   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.656898   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:11.656974   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:11.687937   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.687953   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:11.687962   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:11.687971   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:11.730338   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:11.730358   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:11.742630   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:11.742643   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:11.795410   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:11.795421   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:11.795429   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:11.809830   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:11.809843   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:13.874861   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064395737s)
	I0801 17:35:16.376183   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:16.399312   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:16.430262   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.430275   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:16.430337   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:16.460017   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.460034   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:16.460093   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:16.491848   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.491860   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:16.491920   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:16.521940   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.521955   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:16.522015   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:16.551494   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.551507   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:16.551567   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:16.582166   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.582182   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:16.582246   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:16.613564   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.613577   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:16.613646   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:16.642889   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.642902   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:16.642909   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:16.642916   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:16.705324   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:16.705334   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:16.705340   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:16.719372   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:16.719385   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:18.776640   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056791977s)
	I0801 17:35:18.776769   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:18.776779   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:18.825208   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:18.825237   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:21.339622   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:21.400203   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:21.433513   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.433525   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:21.433585   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:21.479281   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.479293   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:21.479351   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:21.528053   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.528075   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:21.528152   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:21.570823   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.570842   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:21.570914   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:21.622051   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.622066   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:21.622120   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:21.662421   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.662433   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:21.662494   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:21.700986   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.701004   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:21.701071   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:21.761715   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.761733   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:21.761744   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:21.761754   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:21.812508   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:21.812527   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:21.829925   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:21.829963   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:21.894716   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:21.894731   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:21.894740   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:21.915852   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:21.915872   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:23.988264   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.072039463s)
	I0801 17:35:26.488923   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:26.902539   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:26.934000   30307 logs.go:274] 0 containers: []
	W0801 17:35:26.934013   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:26.934097   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:26.962321   30307 logs.go:274] 0 containers: []
	W0801 17:35:26.962333   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:26.962392   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:26.991695   30307 logs.go:274] 0 containers: []
	W0801 17:35:26.991707   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:26.991767   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:27.019837   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.019849   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:27.019909   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:27.049346   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.049358   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:27.049416   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:27.078615   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.078626   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:27.078682   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:27.107692   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.107705   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:27.107764   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:27.135696   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.135711   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:27.135718   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:27.135726   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:27.179734   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:27.179751   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:27.192465   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:27.192482   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:27.246895   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:27.246908   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:27.246915   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:27.260599   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:27.260611   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:29.314532   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053665084s)
	I0801 17:35:31.815083   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:31.903100   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:31.934197   30307 logs.go:274] 0 containers: []
	W0801 17:35:31.934208   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:31.934264   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:31.963017   30307 logs.go:274] 0 containers: []
	W0801 17:35:31.963028   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:31.963086   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:31.993025   30307 logs.go:274] 0 containers: []
	W0801 17:35:31.993039   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:31.993098   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:32.022103   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.022116   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:32.022174   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:32.051243   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.051255   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:32.051310   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:32.081226   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.081238   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:32.081294   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:32.110522   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.110535   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:32.110593   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:32.139913   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.139927   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:32.139935   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:32.139943   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:32.181780   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:32.181796   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:32.194244   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:32.194258   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:32.244454   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:32.244465   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:32.244472   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:32.258059   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:32.258071   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:34.313901   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05563403s)
	I0801 17:35:36.814353   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:36.902028   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:36.932525   30307 logs.go:274] 0 containers: []
	W0801 17:35:36.932537   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:36.932595   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:36.965941   30307 logs.go:274] 0 containers: []
	W0801 17:35:36.965952   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:36.966010   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:36.997194   30307 logs.go:274] 0 containers: []
	W0801 17:35:36.997206   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:36.997265   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:37.027992   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.028004   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:37.028058   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:37.057894   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.057906   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:37.057963   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:37.091455   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.091467   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:37.091527   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:37.127099   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.127112   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:37.127168   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:37.164814   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.216228   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:37.216316   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:37.216333   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:37.259473   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:37.259490   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:37.271319   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:37.271338   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:37.326930   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:37.326944   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:37.326956   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:37.342336   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:37.342350   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:39.395576   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053071303s)
	I0801 17:35:41.898084   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:42.402026   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:42.434886   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.434900   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:42.434955   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:42.464377   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.464389   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:42.464445   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:42.492747   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.492759   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:42.492818   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:42.521139   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.521153   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:42.521209   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:42.550296   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.550307   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:42.550363   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:42.579268   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.579281   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:42.579338   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:42.608287   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.608299   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:42.608352   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:42.637135   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.637150   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:42.637163   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:42.637175   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:42.650659   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:42.650670   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:44.706919   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056125086s)
	I0801 17:35:44.707024   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:44.707030   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:44.746683   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:44.746696   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:44.757796   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:44.757808   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:44.810488   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:47.311546   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:47.402199   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:47.431779   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.431796   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:47.431878   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:47.462490   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.462504   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:47.462563   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:47.491434   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.491447   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:47.491504   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:47.520881   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.520894   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:47.520968   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:47.550517   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.550529   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:47.550584   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:47.580190   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.580205   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:47.580261   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:47.608687   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.608698   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:47.608757   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:47.638031   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.638044   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:47.638051   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:47.638057   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:47.649363   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:47.649376   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:47.701537   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:47.701547   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:47.701554   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:47.714906   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:47.714918   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:49.767687   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052668978s)
	I0801 17:35:49.767793   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:49.767799   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:52.306896   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:52.404357   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:52.435516   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.435528   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:52.435587   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:52.466505   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.466517   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:52.466576   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:52.495280   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.495292   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:52.495351   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:52.523452   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.523464   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:52.523522   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:52.552296   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.552308   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:52.552367   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:52.582614   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.582628   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:52.582686   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:52.611494   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.611510   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:52.611571   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:52.643062   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.643073   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:52.643081   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:52.643088   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:52.683875   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:52.683894   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:52.696292   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:52.696306   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:52.751367   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:52.751385   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:52.751398   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:52.764882   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:52.764895   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:54.823481   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058501379s)
	I0801 17:35:57.325623   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:57.404554   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:57.435795   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.435806   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:57.435864   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:57.464534   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.464547   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:57.464609   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:57.493563   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.493576   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:57.493631   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:57.521806   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.521818   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:57.521876   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:57.550038   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.550052   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:57.550128   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:57.584225   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.584251   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:57.584312   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:57.613276   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.613289   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:57.613348   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:57.641915   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.641927   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:57.641934   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:57.641942   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:57.681293   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:57.681305   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:57.692507   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:57.692519   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:57.744366   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:57.744377   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:57.744384   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:57.758258   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:57.758270   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:59.813771   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055426001s)
	I0801 17:36:02.314786   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:02.403097   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:02.432291   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.432303   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:02.432366   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:02.462408   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.462420   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:02.462478   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:02.491149   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.491167   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:02.491224   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:02.519302   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.519315   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:02.519372   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:02.548267   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.548281   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:02.548342   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:02.576524   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.576538   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:02.576595   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:02.605216   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.605228   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:02.605287   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:02.634873   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.634885   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:02.634892   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:02.634902   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:02.648952   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:02.648965   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:04.701091   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052060777s)
	I0801 17:36:04.701205   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:04.701212   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:04.740173   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:04.740190   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:04.751825   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:04.751838   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:04.803705   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:07.306022   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:07.404847   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:07.435505   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.435517   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:07.435573   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:07.463625   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.463637   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:07.463694   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:07.491535   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.491547   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:07.491610   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:07.520843   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.520855   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:07.520914   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:07.549909   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.549922   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:07.549979   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:07.578735   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.578749   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:07.578812   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:07.609291   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.609304   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:07.609360   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:07.638717   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.638731   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:07.638739   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:07.638746   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:07.650180   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:07.650194   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:07.708994   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:07.709004   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:07.709011   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:07.722398   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:07.722410   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:09.776740   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054270118s)
	I0801 17:36:09.776854   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:09.776862   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:12.317307   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:12.402938   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:12.442523   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.442537   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:12.442601   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:12.470774   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.470787   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:12.470855   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:12.508498   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.508512   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:12.508573   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:12.542161   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.542174   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:12.542230   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:12.570767   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.570782   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:12.570844   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:12.610930   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.610948   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:12.610993   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:12.647001   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.647013   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:12.647065   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:12.688997   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.689014   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:12.689023   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:12.689035   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:12.739740   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:12.739761   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:12.753725   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:12.753746   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:12.840901   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:12.840915   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:12.840923   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:12.855530   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:12.855545   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:14.911131   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055529458s)
	I0801 17:36:17.411482   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:17.903846   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:17.936167   30307 logs.go:274] 0 containers: []
	W0801 17:36:17.936179   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:17.936242   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:17.968428   30307 logs.go:274] 0 containers: []
	W0801 17:36:17.968440   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:17.968497   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:17.999646   30307 logs.go:274] 0 containers: []
	W0801 17:36:17.999656   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:17.999699   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:18.035584   30307 logs.go:274] 0 containers: []
	W0801 17:36:18.035596   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:18.035659   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:18.068540   30307 logs.go:274] 0 containers: []
	W0801 17:36:18.068553   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:18.068613   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:18.099474   30307 logs.go:274] 0 containers: []
	W0801 17:36:18.099488   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:18.099549   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:18.131024   30307 logs.go:274] 0 containers: []
	W0801 17:36:18.131039   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:18.131101   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:18.165392   30307 logs.go:274] 0 containers: []
	W0801 17:36:18.165405   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:18.165411   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:18.165419   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:18.178438   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:18.178454   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:18.242460   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:18.242473   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:18.242481   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:18.258079   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:18.258092   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:20.309135   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05099152s)
	I0801 17:36:20.309246   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:20.309253   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:22.849082   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:22.903018   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:22.933182   30307 logs.go:274] 0 containers: []
	W0801 17:36:22.933194   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:22.933249   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:22.962093   30307 logs.go:274] 0 containers: []
	W0801 17:36:22.962105   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:22.962163   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:23.006930   30307 logs.go:274] 0 containers: []
	W0801 17:36:23.006949   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:23.007060   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:23.037533   30307 logs.go:274] 0 containers: []
	W0801 17:36:23.037545   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:23.037608   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:23.068107   30307 logs.go:274] 0 containers: []
	W0801 17:36:23.068119   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:23.068175   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:23.108907   30307 logs.go:274] 0 containers: []
	W0801 17:36:23.108923   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:23.108992   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:23.138491   30307 logs.go:274] 0 containers: []
	W0801 17:36:23.138503   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:23.138559   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:23.171131   30307 logs.go:274] 0 containers: []
	W0801 17:36:23.171142   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:23.171149   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:23.171156   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:23.228082   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:23.228101   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:23.241181   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:23.241196   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:23.296237   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:23.296254   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:23.296261   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:23.309884   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:23.309897   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:25.361400   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05145407s)
	I0801 17:36:27.861766   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:27.903252   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:27.941966   30307 logs.go:274] 0 containers: []
	W0801 17:36:27.941978   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:27.942036   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:27.974488   30307 logs.go:274] 0 containers: []
	W0801 17:36:27.974500   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:27.974557   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:28.009823   30307 logs.go:274] 0 containers: []
	W0801 17:36:28.009836   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:28.009893   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:28.040633   30307 logs.go:274] 0 containers: []
	W0801 17:36:28.040651   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:28.040725   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:28.071415   30307 logs.go:274] 0 containers: []
	W0801 17:36:28.071428   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:28.071484   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:28.103605   30307 logs.go:274] 0 containers: []
	W0801 17:36:28.103617   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:28.103676   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:28.134469   30307 logs.go:274] 0 containers: []
	W0801 17:36:28.134483   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:28.134540   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:28.167786   30307 logs.go:274] 0 containers: []
	W0801 17:36:28.167799   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:28.167808   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:28.167815   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:30.224504   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056639702s)
	I0801 17:36:30.224613   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:30.224624   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:30.267549   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:30.267564   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:30.279888   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:30.279902   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:30.336188   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:30.336199   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:30.336205   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:32.850738   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:32.904711   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:32.933519   30307 logs.go:274] 0 containers: []
	W0801 17:36:32.933537   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:32.933607   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:32.962685   30307 logs.go:274] 0 containers: []
	W0801 17:36:32.962697   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:32.962759   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:32.992320   30307 logs.go:274] 0 containers: []
	W0801 17:36:32.992332   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:32.992395   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:33.022484   30307 logs.go:274] 0 containers: []
	W0801 17:36:33.022502   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:33.022572   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:33.052885   30307 logs.go:274] 0 containers: []
	W0801 17:36:33.052898   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:33.052963   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:33.098905   30307 logs.go:274] 0 containers: []
	W0801 17:36:33.098921   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:33.098990   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:33.139085   30307 logs.go:274] 0 containers: []
	W0801 17:36:33.139100   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:33.139202   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:33.171341   30307 logs.go:274] 0 containers: []
	W0801 17:36:33.171354   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:33.171362   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:33.171371   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:33.215787   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:33.215804   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:33.227825   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:33.227837   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:33.299858   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:33.299871   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:33.299877   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:33.314122   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:33.314137   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:35.410414   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.096228325s)
	I0801 17:36:37.910827   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:38.403243   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:38.452974   30307 logs.go:274] 0 containers: []
	W0801 17:36:38.452989   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:38.453051   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:38.501069   30307 logs.go:274] 0 containers: []
	W0801 17:36:38.501086   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:38.501148   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:38.532474   30307 logs.go:274] 0 containers: []
	W0801 17:36:38.532490   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:38.532557   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:38.564804   30307 logs.go:274] 0 containers: []
	W0801 17:36:38.564816   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:38.564872   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:38.595705   30307 logs.go:274] 0 containers: []
	W0801 17:36:38.595719   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:38.595785   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:38.629657   30307 logs.go:274] 0 containers: []
	W0801 17:36:38.629672   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:38.629735   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:38.665020   30307 logs.go:274] 0 containers: []
	W0801 17:36:38.665033   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:38.665097   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:38.707000   30307 logs.go:274] 0 containers: []
	W0801 17:36:38.707013   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:38.707022   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:38.707029   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:38.754963   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:38.754979   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:38.767357   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:38.767370   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:38.821602   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:38.821615   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:38.821621   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:38.835944   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:38.835956   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:40.890439   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054432988s)
	I0801 17:36:43.391047   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:43.403301   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:43.432166   30307 logs.go:274] 0 containers: []
	W0801 17:36:43.432178   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:43.432238   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:43.463131   30307 logs.go:274] 0 containers: []
	W0801 17:36:43.463144   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:43.463203   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:43.492655   30307 logs.go:274] 0 containers: []
	W0801 17:36:43.492667   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:43.492729   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:43.522515   30307 logs.go:274] 0 containers: []
	W0801 17:36:43.522527   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:43.522605   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:43.551925   30307 logs.go:274] 0 containers: []
	W0801 17:36:43.551955   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:43.552039   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:43.584501   30307 logs.go:274] 0 containers: []
	W0801 17:36:43.584514   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:43.584583   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:43.614275   30307 logs.go:274] 0 containers: []
	W0801 17:36:43.614290   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:43.614357   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:43.652389   30307 logs.go:274] 0 containers: []
	W0801 17:36:43.652401   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:43.652408   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:43.652415   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:43.695767   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:43.695784   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:43.708809   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:43.708823   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:43.766219   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:43.766230   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:43.766241   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:43.781602   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:43.781615   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:45.840595   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058934227s)
	I0801 17:36:48.341035   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:48.403458   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:48.440264   30307 logs.go:274] 0 containers: []
	W0801 17:36:48.440279   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:48.440349   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:48.472191   30307 logs.go:274] 0 containers: []
	W0801 17:36:48.472203   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:48.472260   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:48.502053   30307 logs.go:274] 0 containers: []
	W0801 17:36:48.502065   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:48.502121   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:48.532692   30307 logs.go:274] 0 containers: []
	W0801 17:36:48.532707   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:48.532771   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:48.565479   30307 logs.go:274] 0 containers: []
	W0801 17:36:48.565492   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:48.565554   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:48.596355   30307 logs.go:274] 0 containers: []
	W0801 17:36:48.596370   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:48.596429   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:48.628477   30307 logs.go:274] 0 containers: []
	W0801 17:36:48.628489   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:48.628549   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:48.659749   30307 logs.go:274] 0 containers: []
	W0801 17:36:48.659764   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:48.659774   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:48.659787   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:48.704541   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:48.704555   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:48.716607   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:48.716621   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:48.773685   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:48.773695   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:48.773702   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:48.788250   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:48.788263   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:50.842020   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053711195s)
	I0801 17:36:53.342680   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:53.405185   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:53.435214   30307 logs.go:274] 0 containers: []
	W0801 17:36:53.435231   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:53.435297   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:53.464168   30307 logs.go:274] 0 containers: []
	W0801 17:36:53.464181   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:53.464269   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:53.493238   30307 logs.go:274] 0 containers: []
	W0801 17:36:53.493250   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:53.493307   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:53.521684   30307 logs.go:274] 0 containers: []
	W0801 17:36:53.521695   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:53.521754   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:53.550412   30307 logs.go:274] 0 containers: []
	W0801 17:36:53.550425   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:53.550485   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:53.578394   30307 logs.go:274] 0 containers: []
	W0801 17:36:53.578409   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:53.578467   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:53.607175   30307 logs.go:274] 0 containers: []
	W0801 17:36:53.607192   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:53.607249   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:53.638154   30307 logs.go:274] 0 containers: []
	W0801 17:36:53.638173   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:53.638183   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:53.638192   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:53.680595   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:53.680616   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:53.692771   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:53.692784   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:53.745457   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:53.745467   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:53.745473   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:53.759026   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:53.759040   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:55.817685   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058600135s)
	I0801 17:36:58.319321   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:58.403688   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:58.433967   30307 logs.go:274] 0 containers: []
	W0801 17:36:58.433980   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:58.434045   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:58.464868   30307 logs.go:274] 0 containers: []
	W0801 17:36:58.464883   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:58.464945   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:58.496484   30307 logs.go:274] 0 containers: []
	W0801 17:36:58.496497   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:58.496554   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:58.524949   30307 logs.go:274] 0 containers: []
	W0801 17:36:58.524964   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:58.525028   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:58.556852   30307 logs.go:274] 0 containers: []
	W0801 17:36:58.556865   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:58.556945   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:58.595703   30307 logs.go:274] 0 containers: []
	W0801 17:36:58.595728   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:58.595796   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:58.628670   30307 logs.go:274] 0 containers: []
	W0801 17:36:58.628684   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:58.628751   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:58.658421   30307 logs.go:274] 0 containers: []
	W0801 17:36:58.658435   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:58.658443   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:58.658450   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:58.670628   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:58.670643   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:58.728818   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:58.728830   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:58.728837   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:58.745367   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:58.745381   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:37:00.799901   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054473023s)
	I0801 17:37:00.800024   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:37:00.800032   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:37:03.350299   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:03.403936   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:37:03.435664   30307 logs.go:274] 0 containers: []
	W0801 17:37:03.435676   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:37:03.435736   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:37:03.464107   30307 logs.go:274] 0 containers: []
	W0801 17:37:03.464120   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:37:03.464179   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:37:03.493627   30307 logs.go:274] 0 containers: []
	W0801 17:37:03.493640   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:37:03.493695   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:37:03.522445   30307 logs.go:274] 0 containers: []
	W0801 17:37:03.522459   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:37:03.522517   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:37:03.551041   30307 logs.go:274] 0 containers: []
	W0801 17:37:03.551053   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:37:03.551126   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:37:03.586337   30307 logs.go:274] 0 containers: []
	W0801 17:37:03.586350   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:37:03.586407   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:37:03.615150   30307 logs.go:274] 0 containers: []
	W0801 17:37:03.615165   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:37:03.615231   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:37:03.649965   30307 logs.go:274] 0 containers: []
	W0801 17:37:03.649977   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:37:03.649985   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:37:03.649991   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:37:03.663965   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:37:03.663977   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:37:05.717550   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0535288s)
	I0801 17:37:05.717656   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:37:05.717663   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:37:05.758467   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:37:05.758482   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:37:05.771787   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:37:05.771802   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:37:05.825518   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:37:08.326566   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:08.405252   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:37:08.437523   30307 logs.go:274] 0 containers: []
	W0801 17:37:08.437537   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:37:08.437599   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:37:08.465828   30307 logs.go:274] 0 containers: []
	W0801 17:37:08.465840   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:37:08.465897   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:37:08.493638   30307 logs.go:274] 0 containers: []
	W0801 17:37:08.493652   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:37:08.493710   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:37:08.522735   30307 logs.go:274] 0 containers: []
	W0801 17:37:08.522747   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:37:08.522804   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:37:08.551736   30307 logs.go:274] 0 containers: []
	W0801 17:37:08.551750   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:37:08.551807   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:37:08.583955   30307 logs.go:274] 0 containers: []
	W0801 17:37:08.583968   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:37:08.584034   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:37:08.613067   30307 logs.go:274] 0 containers: []
	W0801 17:37:08.613103   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:37:08.613170   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:37:08.645175   30307 logs.go:274] 0 containers: []
	W0801 17:37:08.645190   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:37:08.645198   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:37:08.645207   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:37:10.700464   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055212397s)
	I0801 17:37:10.700568   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:37:10.700575   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:37:10.740319   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:37:10.740332   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:37:10.751829   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:37:10.751843   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:37:10.804169   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:37:10.804181   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:37:10.804188   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:37:13.321047   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:13.331208   30307 kubeadm.go:630] restartCluster took 4m6.19787161s
	W0801 17:37:13.331293   30307 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0801 17:37:13.331317   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0801 17:37:13.749343   30307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:37:13.758697   30307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:37:13.766123   30307 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:37:13.766171   30307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:37:13.773534   30307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:37:13.773561   30307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:37:14.504027   30307 out.go:204]   - Generating certificates and keys ...
	I0801 17:37:15.128467   30307 out.go:204]   - Booting up control plane ...
	W0801 17:39:10.045604   30307 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0801 17:39:10.045633   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0801 17:39:10.468055   30307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:39:10.477578   30307 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:39:10.477629   30307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:39:10.485644   30307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:39:10.485666   30307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:39:11.219133   30307 out.go:204]   - Generating certificates and keys ...
	I0801 17:39:11.823639   30307 out.go:204]   - Booting up control plane ...
	I0801 17:41:06.739199   30307 kubeadm.go:397] StartCluster complete in 7m59.637942115s
	I0801 17:41:06.739275   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:41:06.768243   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.768256   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:41:06.768314   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:41:06.798174   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.798186   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:41:06.798242   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:41:06.827196   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.827207   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:41:06.827266   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:41:06.857151   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.857164   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:41:06.857221   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:41:06.886482   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.886494   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:41:06.886551   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:41:06.915571   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.915583   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:41:06.915642   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:41:06.946187   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.946200   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:41:06.946261   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:41:06.976305   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.976317   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:41:06.976324   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:41:06.976330   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:41:09.033371   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056995262s)
	I0801 17:41:09.033517   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:41:09.033529   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:41:09.074454   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:41:09.074467   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:41:09.086365   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:41:09.086383   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:41:09.139109   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:41:09.139121   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:41:09.139129   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0801 17:41:09.152961   30307 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0801 17:41:09.152979   30307 out.go:239] * 
	W0801 17:41:09.153075   30307 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 17:41:09.153105   30307 out.go:239] * 
	W0801 17:41:09.153626   30307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 17:41:09.216113   30307 out.go:177] 
	W0801 17:41:09.258477   30307 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 17:41:09.258605   30307 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0801 17:41:09.258689   30307 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0801 17:41:09.300266   30307 out.go:177] 

** /stderr **
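The kubeadm failure above bottoms out in kubelet health, and the log itself names the checks to run. A minimal troubleshooting sketch along those lines, assuming the node container from this run is still up; the docker exec wrapping is illustrative, and only the inner commands come from the log:

	# Run kubeadm's suggested checks inside the minikube node container;
	# the container name is the profile name shown in the inspect output below.
	docker exec old-k8s-version-20220801172716-13911 systemctl status kubelet
	docker exec old-k8s-version-20220801172716-13911 journalctl -xeu kubelet --no-pager | tail -n 50

	# List the Kubernetes containers the runtime started, per the log's example.
	docker exec old-k8s-version-20220801172716-13911 docker ps -a | grep kube | grep -v pause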
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220801172716-13911 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
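Minikube's own suggestion in the stderr above is to pin the kubelet cgroup driver to systemd. A sketch of that retry, reusing the exact arguments from the failing command; only the final --extra-config flag is new, taken verbatim from the suggestion and not verified against this run:

	out/minikube-darwin-amd64 start -p old-k8s-version-20220801172716-13911 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd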
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220801172716-13911
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220801172716-13911:

-- stdout --
	[
	    {
	        "Id": "dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6",
	        "Created": "2022-08-02T00:27:24.523444703Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246645,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:33:03.548358911Z",
	            "FinishedAt": "2022-08-02T00:33:00.53307201Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hostname",
	        "HostsPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hosts",
	        "LogPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6-json.log",
	        "Name": "/old-k8s-version-20220801172716-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220801172716-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220801172716-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220801172716-13911",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220801172716-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220801172716-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7033b72c7cb5dd94daf6f66da715470e46ad00b0bd6f037aa3061302fc290971",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50784"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50785"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50786"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50787"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50783"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7033b72c7cb5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220801172716-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dfb69a533565",
	                        "old-k8s-version-20220801172716-13911"
	                    ],
	                    "NetworkID": "947fc21b2e0fc27b09dd4dd43b477927d08a61d441a541fee2a6fa712bca71b9",
	                    "EndpointID": "a3b831dd7b0090943b49fd33eab9fa69501e40c1e99428d6b52499a1a33c63e3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
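The full docker inspect dump above is searchable, but the post-mortem only needs a few fields. A sketch of narrowing it with --format Go templates (standard docker CLI templating; the port key mirrors the Ports map shown above):

	# Container state: matches the "State" block in the JSON above.
	docker inspect -f '{{.State.Status}} since {{.State.StartedAt}}' old-k8s-version-20220801172716-13911

	# Host port mapped to the API server port 8443/tcp (50783 in this run).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-20220801172716-13911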
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 2 (441.880123ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
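Exit status 2 from minikube status means at least one component is not Running even though the host is, which is why the harness notes it "may be ok". A sketch that prints the remaining status fields to separate "container up" from "control plane up"; the field names follow minikube's status template and are assumed available in this build:

	out/minikube-darwin-amd64 status -p old-k8s-version-20220801172716-13911 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'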
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220801172716-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220801172716-13911 logs -n 25: (3.601696663s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                   |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                | enable-default-cni-20220801171037-13911    | jenkins | v1.26.0 | 01 Aug 22 17:27 PDT | 01 Aug 22 17:27 PDT |
	|         | enable-default-cni-20220801171037-13911           |                                            |         |         |                     |                     |
	| start   | -p                                                | kubenet-20220801171037-13911               | jenkins | v1.26.0 | 01 Aug 22 17:27 PDT | 01 Aug 22 17:28 PDT |
	|         | kubenet-20220801171037-13911                      |                                            |         |         |                     |                     |
	|         | --memory=2048                                     |                                            |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                            |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                            |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:27 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                            |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                            |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                            |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                            |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                            |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220801171037-13911               | jenkins | v1.26.0 | 01 Aug 22 17:28 PDT | 01 Aug 22 17:28 PDT |
	|         | kubenet-20220801171037-13911                      |                                            |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                            |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220801171037-13911               | jenkins | v1.26.0 | 01 Aug 22 17:29 PDT | 01 Aug 22 17:29 PDT |
	|         | kubenet-20220801171037-13911                      |                                            |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:29 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                            |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                            |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                            |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                            |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:31 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                            |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                            |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT | 01 Aug 22 17:33 PDT |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                            |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT | 01 Aug 22 17:33 PDT |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                            |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                            |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                            |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                            |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                            |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                            |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                            |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                            |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                            |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220801173625-13911 | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | disable-driver-mounts-20220801173625-13911        |                                            |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                            |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                            |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                            |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                            |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT |                     |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	|---------|---------------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:37:45
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:37:45.136795   31047 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:37:45.137023   31047 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:37:45.137028   31047 out.go:309] Setting ErrFile to fd 2...
	I0801 17:37:45.137032   31047 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:37:45.137145   31047 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:37:45.137612   31047 out.go:303] Setting JSON to false
	I0801 17:37:45.152591   31047 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9436,"bootTime":1659391229,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:37:45.152701   31047 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:37:45.174344   31047 out.go:177] * [no-preload-20220801173626-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:37:45.196180   31047 notify.go:193] Checking for updates...
	I0801 17:37:45.217756   31047 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:37:45.238861   31047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:37:45.260039   31047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:37:45.280936   31047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:37:45.302202   31047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:37:45.324757   31047 config.go:180] Loaded profile config "no-preload-20220801173626-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:37:45.325426   31047 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:37:45.394757   31047 docker.go:137] docker version: linux-20.10.17
	I0801 17:37:45.394914   31047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:37:45.527503   31047 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:37:45.457586218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:37:45.571140   31047 out.go:177] * Using the docker driver based on existing profile
	I0801 17:37:45.592082   31047 start.go:284] selected driver: docker
	I0801 17:37:45.592099   31047 start.go:808] validating driver "docker" against &{Name:no-preload-20220801173626-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedul
edStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:37:45.592198   31047 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:37:45.594452   31047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:37:45.733083   31047 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:37:45.664473823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:37:45.733245   31047 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 17:37:45.733262   31047 cni.go:95] Creating CNI manager for ""
	I0801 17:37:45.733271   31047 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:37:45.733294   31047 start_flags.go:310] config:
	{Name:no-preload-20220801173626-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:37:45.777022   31047 out.go:177] * Starting control plane node no-preload-20220801173626-13911 in cluster no-preload-20220801173626-13911
	I0801 17:37:45.799262   31047 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:37:45.820970   31047 out.go:177] * Pulling base image ...
	I0801 17:37:45.842197   31047 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:37:45.842217   31047 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:37:45.842421   31047 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/config.json ...
	I0801 17:37:45.842537   31047 cache.go:107] acquiring lock: {Name:mkce27c207a7bf01881de4cf2e18a8ec061785d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.842574   31047 cache.go:107] acquiring lock: {Name:mk33f064d166c5a0dc9a025cb9d5db4a25dde34f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.843994   31047 cache.go:107] acquiring lock: {Name:mk83ada496db165959cae463687f409b745fe431 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844359   31047 cache.go:107] acquiring lock: {Name:mk1a37bbfd8a0fda4175037a2df9b28a8bff25fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844423   31047 cache.go:107] acquiring lock: {Name:mk8f04950ca6b931221e073d61c347db62721cdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844390   31047 cache.go:107] acquiring lock: {Name:mk885468f27c8850bc0b7933d3a2ff478aab774d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844464   31047 cache.go:107] acquiring lock: {Name:mk3407b9bf31dee0ad589c69c26f0a179fd3a6e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844507   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 exists
	I0801 17:37:45.844473   31047 cache.go:107] acquiring lock: {Name:mk8a29c24e1671055af457da8f29bfaf97f492d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.845147   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 exists
	I0801 17:37:45.845108   31047 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3" took 1.980679ms
	I0801 17:37:45.844483   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0801 17:37:45.845289   31047 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 succeeded
	I0801 17:37:45.845305   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 exists
	I0801 17:37:45.845302   31047 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.789096ms
	I0801 17:37:45.845308   31047 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0" took 2.704085ms
	I0801 17:37:45.845327   31047 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0801 17:37:45.845337   31047 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 succeeded
	I0801 17:37:45.845331   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 exists
	I0801 17:37:45.845313   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 exists
	I0801 17:37:45.845364   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 exists
	I0801 17:37:45.845372   31047 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3" took 1.105087ms
	I0801 17:37:45.845382   31047 cache.go:96] cache image "k8s.gcr.io/pause:3.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7" took 1.189591ms
	I0801 17:37:45.845390   31047 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 succeeded
	I0801 17:37:45.845393   31047 cache.go:80] save to tar file k8s.gcr.io/pause:3.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 succeeded
	I0801 17:37:45.845393   31047 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3" took 1.139616ms
	I0801 17:37:45.845347   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0801 17:37:45.845416   31047 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 succeeded
	I0801 17:37:45.845331   31047 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3" took 1.102183ms
	I0801 17:37:45.845430   31047 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 succeeded
	I0801 17:37:45.845426   31047 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 1.373082ms
	I0801 17:37:45.845440   31047 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0801 17:37:45.845462   31047 cache.go:87] Successfully saved all images to host disk.
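
The cache.go lines above show minikube's check-then-skip pattern: for each image it stats the per-image tarball under .minikube/cache/images and only saves a new tar file when that path is absent (the cache.go:115 "exists" branch versus the cache.go:80 "save to tar file ... succeeded" branch). A minimal stdlib Go sketch of that existence check follows; the cache directory and image list here are illustrative, not minikube's internal API.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachedImagePath maps "k8s.gcr.io/pause:3.7" to the on-disk layout seen
    // in the log: <cacheDir>/k8s.gcr.io/pause_3.7 (tag separator ':' -> '_').
    func cachedImagePath(cacheDir, image string) string {
        if i := strings.LastIndex(image, ":"); i >= 0 {
            image = image[:i] + "_" + image[i+1:]
        }
        return filepath.Join(cacheDir, image)
    }

    func main() {
        cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/amd64") // illustrative path
        for _, img := range []string{"k8s.gcr.io/pause:3.7", "k8s.gcr.io/etcd:3.5.3-0"} {
            p := cachedImagePath(cacheDir, img)
            if _, err := os.Stat(p); err == nil {
                fmt.Println(p, "exists, skipping save") // the cache.go:115 branch
            } else {
                fmt.Println(p, "missing, would save tar file") // the cache.go:80 branch
            }
        }
    }
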
	I0801 17:37:45.908069   31047 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:37:45.908096   31047 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:37:45.908107   31047 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:37:45.908147   31047 start.go:371] acquiring machines lock for no-preload-20220801173626-13911: {Name:mkda6e117952af39a3874882bbd203241b49719c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.908210   31047 start.go:375] acquired machines lock for "no-preload-20220801173626-13911" in 52.481µs
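
The lock spec logged at start.go:371 ({... Delay:500ms Timeout:10m0s ...}) serializes machine operations per profile: retry every 500ms, give up after 10 minutes. A hedged stdlib sketch of a lockfile with that retry/timeout shape; this is not minikube's actual mutex implementation, only the same acquire-or-wait behavior.

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire creates path exclusively, retrying every delay until timeout,
    // mirroring the {Delay:500ms Timeout:10m0s} spec in the log.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out acquiring " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("acquired machines lock") // start.go:375 equivalent
    }
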
	I0801 17:37:45.908230   31047 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:37:45.908238   31047 fix.go:55] fixHost starting: 
	I0801 17:37:45.908457   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:37:45.974772   31047 fix.go:103] recreateIfNeeded on no-preload-20220801173626-13911: state=Stopped err=<nil>
	W0801 17:37:45.974798   31047 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:37:45.996880   31047 out.go:177] * Restarting existing docker container for "no-preload-20220801173626-13911" ...
	I0801 17:37:46.018574   31047 cli_runner.go:164] Run: docker start no-preload-20220801173626-13911
	I0801 17:37:46.384675   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:37:46.457749   31047 kic.go:415] container "no-preload-20220801173626-13911" state is running.
	I0801 17:37:46.458352   31047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220801173626-13911
	I0801 17:37:46.531639   31047 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/config.json ...
	I0801 17:37:46.532029   31047 machine.go:88] provisioning docker machine ...
	I0801 17:37:46.532061   31047 ubuntu.go:169] provisioning hostname "no-preload-20220801173626-13911"
	I0801 17:37:46.532140   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:46.605057   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:46.605254   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:46.605270   31047 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220801173626-13911 && echo "no-preload-20220801173626-13911" | sudo tee /etc/hostname
	I0801 17:37:46.733056   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220801173626-13911
	
	I0801 17:37:46.733140   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:46.805118   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:46.805272   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:46.805287   31047 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220801173626-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220801173626-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220801173626-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:37:46.917485   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:37:46.917506   31047 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:37:46.917535   31047 ubuntu.go:177] setting up certificates
	I0801 17:37:46.917541   31047 provision.go:83] configureAuth start
	I0801 17:37:46.917615   31047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220801173626-13911
	I0801 17:37:46.990412   31047 provision.go:138] copyHostCerts
	I0801 17:37:46.990491   31047 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:37:46.990502   31047 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:37:46.990596   31047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:37:46.990798   31047 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:37:46.990808   31047 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:37:46.990864   31047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:37:46.991000   31047 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:37:46.991007   31047 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:37:46.991062   31047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:37:46.991772   31047 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220801173626-13911 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220801173626-13911]
	I0801 17:37:47.183740   31047 provision.go:172] copyRemoteCerts
	I0801 17:37:47.183812   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:37:47.183860   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.256107   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:47.339121   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:37:47.356831   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0801 17:37:47.373830   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0801 17:37:47.392418   31047 provision.go:86] duration metric: configureAuth took 474.857796ms
	I0801 17:37:47.392433   31047 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:37:47.392595   31047 config.go:180] Loaded profile config "no-preload-20220801173626-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:37:47.392663   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.464884   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:47.465036   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:47.465047   31047 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:37:47.579712   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:37:47.579729   31047 ubuntu.go:71] root file system type: overlay
	I0801 17:37:47.579870   31047 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:37:47.579944   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.650983   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:47.651127   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:47.651186   31047 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:37:47.774346   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:37:47.774436   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.845704   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:47.845865   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:47.845879   31047 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:37:47.964006   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:37:47.964021   31047 machine.go:91] provisioned docker machine in 1.43196114s
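
The unit update that just completed is deliberately idempotent: diff the freshly rendered unit against the live one, and only mv + daemon-reload + restart docker when they differ, so an unchanged configuration never bounces the daemon. A sketch of the same compare-then-swap in Go; the paths come from the log, and the systemctl steps are left as a comment.

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // swapIfChanged replaces current with next only when contents differ,
    // like: diff -u current next || { mv next current; ...restart docker; }
    func swapIfChanged(current, next string) (bool, error) {
        a, _ := os.ReadFile(current) // a missing current file counts as "changed"
        b, err := os.ReadFile(next)
        if err != nil {
            return false, err
        }
        if bytes.Equal(a, b) {
            return false, os.Remove(next) // nothing to do; drop the .new file
        }
        return true, os.Rename(next, current)
    }

    func main() {
        changed, err := swapIfChanged("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new")
        fmt.Println("changed:", changed, "err:", err)
        // only when changed: systemctl daemon-reload && systemctl restart docker
    }
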
	I0801 17:37:47.964037   31047 start.go:307] post-start starting for "no-preload-20220801173626-13911" (driver="docker")
	I0801 17:37:47.964043   31047 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:37:47.964117   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:37:47.964170   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.035712   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.118288   31047 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:37:48.121549   31047 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:37:48.121566   31047 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:37:48.121586   31047 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:37:48.121595   31047 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:37:48.121603   31047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:37:48.121710   31047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:37:48.121847   31047 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:37:48.121999   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:37:48.129029   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:37:48.146801   31047 start.go:310] post-start completed in 182.747614ms
	I0801 17:37:48.146864   31047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:37:48.146917   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.217007   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.300445   31047 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:37:48.304748   31047 fix.go:57] fixHost completed within 2.396472477s
	I0801 17:37:48.304758   31047 start.go:82] releasing machines lock for "no-preload-20220801173626-13911", held for 2.39650437s
	I0801 17:37:48.304820   31047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220801173626-13911
	I0801 17:37:48.374117   31047 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:37:48.374143   31047 ssh_runner.go:195] Run: systemctl --version
	I0801 17:37:48.374196   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.374212   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.449727   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.451539   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.719080   31047 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:37:48.729189   31047 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:37:48.729244   31047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:37:48.740655   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:37:48.753772   31047 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:37:48.824006   31047 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:37:48.896529   31047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:37:48.963357   31047 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:37:49.205490   31047 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0801 17:37:49.268926   31047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:37:49.323147   31047 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0801 17:37:49.332627   31047 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0801 17:37:49.332704   31047 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0801 17:37:49.336848   31047 start.go:471] Will wait 60s for crictl version
	I0801 17:37:49.336901   31047 ssh_runner.go:195] Run: sudo crictl version
	I0801 17:37:49.441376   31047 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0801 17:37:49.441442   31047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:37:49.478518   31047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:37:49.557572   31047 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0801 17:37:49.557790   31047 cli_runner.go:164] Run: docker exec -t no-preload-20220801173626-13911 dig +short host.docker.internal
	I0801 17:37:49.686230   31047 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:37:49.686336   31047 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:37:49.690942   31047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
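
The bash one-liner above is the idempotent /etc/hosts update: drop any stale line ending in a tab plus `host.minikube.internal`, append the fresh mapping, and copy the temp file back over /etc/hosts. The same transform as a stdlib Go sketch; point it at a scratch copy rather than the real hosts file.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any line ending in "\t<name>" and appends "<ip>\t<name>",
    // the same edit the bash pipeline above performs.
    func upsertHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // scratch copy; do not point this at the real /etc/hosts
        if err := upsertHost("/tmp/hosts.copy", "192.168.65.2", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
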
	I0801 17:37:49.700964   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:49.771329   31047 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:37:49.771383   31047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:37:49.802366   31047 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0801 17:37:49.802385   31047 cache_images.go:84] Images are preloaded, skipping loading
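
"Images are preloaded, skipping loading" is the result of comparing the `docker images --format {{.Repository}}:{{.Tag}}` output above against the image set required for v1.24.3; only images missing from the daemon would be loaded from the cache tarballs. A hedged sketch of that set difference (the required list is abbreviated from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        required := []string{ // abbreviated; the log above lists the full set
            "k8s.gcr.io/kube-apiserver:v1.24.3",
            "k8s.gcr.io/etcd:3.5.3-0",
            "k8s.gcr.io/pause:3.7",
        }
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            fmt.Println("docker not available:", err)
            return
        }
        have := map[string]bool{}
        for _, l := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[l] = true
        }
        for _, img := range required {
            if !have[img] {
                fmt.Println("would load:", img)
            }
        }
        // nothing printed: images are preloaded, skipping loading (cache_images.go:84)
    }
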
	I0801 17:37:49.802458   31047 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:37:49.879052   31047 cni.go:95] Creating CNI manager for ""
	I0801 17:37:49.879064   31047 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:37:49.879080   31047 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:37:49.879096   31047 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220801173626-13911 NodeName:no-preload-20220801173626-13911 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:37:49.879194   31047 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "no-preload-20220801173626-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
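
Before the init phases run, this rendered config is written to /var/tmp/minikube/kubeadm.yaml.new and diffed against the active copy (the sudo diff at 17:37:50.351 below). Purely as illustration, a stdlib check that the rendered YAML carries the fields this restart depends on; it is plain string matching, not a YAML parser, and the field list is hand-picked from the config above.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, want := range []string{
            "kubernetesVersion: v1.24.3",
            "criSocket: /var/run/cri-dockerd.sock",
            "controlPlaneEndpoint: control-plane.minikube.internal:8443",
            "cgroupDriver: systemd",
        } {
            if !strings.Contains(string(data), want) {
                fmt.Println("missing:", want)
            }
        }
    }
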
	
	I0801 17:37:49.879290   31047 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=no-preload-20220801173626-13911 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:37:49.879351   31047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 17:37:49.887424   31047 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:37:49.887487   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:37:49.894755   31047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (493 bytes)
	I0801 17:37:49.908266   31047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:37:49.920870   31047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0801 17:37:49.933830   31047 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:37:49.937511   31047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:37:49.946559   31047 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911 for IP: 192.168.67.2
	I0801 17:37:49.946659   31047 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:37:49.946707   31047 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:37:49.946786   31047 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.key
	I0801 17:37:49.946845   31047 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/apiserver.key.c7fa3a9e
	I0801 17:37:49.946897   31047 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/proxy-client.key
	I0801 17:37:49.947100   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:37:49.947138   31047 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:37:49.947151   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:37:49.947189   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:37:49.947218   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:37:49.947250   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:37:49.947309   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:37:49.947829   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:37:49.964521   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0801 17:37:49.981144   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:37:49.997236   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0801 17:37:50.014091   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:37:50.030809   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:37:50.047089   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:37:50.063912   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:37:50.082297   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:37:50.101186   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:37:50.118882   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:37:50.136291   31047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:37:50.149676   31047 ssh_runner.go:195] Run: openssl version
	I0801 17:37:50.163581   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:37:50.171105   31047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:37:50.174935   31047 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:37:50.174989   31047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:37:50.179840   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 17:37:50.186763   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:37:50.194343   31047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:37:50.198345   31047 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:37:50.198395   31047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:37:50.203934   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:37:50.210838   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:37:50.218583   31047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:37:50.222458   31047 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:37:50.222498   31047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:37:50.227505   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
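
The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup: /etc/ssl/certs/<subject-hash>.0 must point at the PEM for the TLS stack to find it, which is why minikubeCA.pem gets the b5213941.0 link. The same two steps as a Go sketch that shells out to openssl; run it as root against the real paths, or adjust the directory for a dry run.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
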
	I0801 17:37:50.234458   31047 kubeadm.go:395] StartCluster: {Name:no-preload-20220801173626-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:37:50.234558   31047 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:37:50.264051   31047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:37:50.271634   31047 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:37:50.271652   31047 kubeadm.go:626] restartCluster start
	I0801 17:37:50.271694   31047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:37:50.278298   31047 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.278364   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:50.349453   31047 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220801173626-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:37:50.349640   31047 kubeconfig.go:127] "no-preload-20220801173626-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:37:50.349966   31047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:37:50.351119   31047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:37:50.358739   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.358794   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.366952   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.567082   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.567203   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.576999   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.769130   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.769340   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.779725   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.969182   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.969292   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.979800   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.167920   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.168015   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.178836   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.367096   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.367205   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.376391   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.569038   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.569130   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.578185   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.769147   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.769333   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.779768   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.967690   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.967807   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.978203   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.168126   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.168251   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.178788   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.367362   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.367477   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.376348   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.569124   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.569313   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.579843   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.767372   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.767476   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.776970   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.968285   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.968420   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.978224   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.168014   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:53.168103   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:53.178218   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.369185   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:53.369348   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:53.380616   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.380627   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:53.380671   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:53.388701   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.388714   31047 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0801 17:37:53.388723   31047 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:37:53.388774   31047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:37:53.420707   31047 docker.go:443] Stopping containers: [d5a3d4ccde35 795a7dfc5c0b 9c6c1ed81713 1d852044111d 2f0cbdfcc618 803f6a6ae70d 41e8b95b80bc b8daaea5d97c b53b375d313f be3fbf75c305 482dbbf122e4 5abcdb77ef04 302f547a73d8 5c08de9ffe04 daf4df3d9163 4dd96b3aa0d4]
	I0801 17:37:53.420777   31047 ssh_runner.go:195] Run: docker stop d5a3d4ccde35 795a7dfc5c0b 9c6c1ed81713 1d852044111d 2f0cbdfcc618 803f6a6ae70d 41e8b95b80bc b8daaea5d97c b53b375d313f be3fbf75c305 482dbbf122e4 5abcdb77ef04 302f547a73d8 5c08de9ffe04 daf4df3d9163 4dd96b3aa0d4
	I0801 17:37:53.452120   31047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:37:53.462361   31047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:37:53.469872   31047 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  2 00:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug  2 00:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug  2 00:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug  2 00:36 /etc/kubernetes/scheduler.conf
	
	I0801 17:37:53.469922   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 17:37:53.477025   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 17:37:53.483955   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 17:37:53.490967   31047 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.491012   31047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0801 17:37:53.497749   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 17:37:53.504618   31047 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.504666   31047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0801 17:37:53.511317   31047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:37:53.518669   31047 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:37:53.518679   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:53.563806   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:54.484230   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:54.652440   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:54.710862   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
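
A cluster restart does not rerun full `kubeadm init`; it replays individual phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), exactly the five Run lines above. A hedged sketch of that sequencing; the binary and config paths are taken from the log, and the PATH/env handling of the real invocation is simplified away.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.3/kubeadm" // path from the log
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            args := append([]string{"init", "phase"}, strings.Fields(p)...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("phase failed:", p, err)
                return // later phases depend on earlier ones
            }
        }
    }
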
	I0801 17:37:54.763698   31047 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:37:54.763766   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:55.273497   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:55.775502   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:55.820137   31047 api_server.go:71] duration metric: took 1.056421863s to wait for apiserver process to appear ...
	I0801 17:37:55.820154   31047 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:37:55.820168   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:55.821591   31047 api_server.go:256] stopped: https://127.0.0.1:51289/healthz: Get "https://127.0.0.1:51289/healthz": EOF
	I0801 17:37:56.322368   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:59.001585   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0801 17:37:59.001601   31047 api_server.go:102] status: https://127.0.0.1:51289/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:37:59.323815   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:59.331879   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:37:59.331896   31047 api_server.go:102] status: https://127.0.0.1:51289/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:37:59.821943   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:59.827351   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:37:59.827368   31047 api_server.go:102] status: https://127.0.0.1:51289/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:38:00.324020   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:38:00.331405   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 200:
	ok
	I0801 17:38:00.337668   31047 api_server.go:140] control plane version: v1.24.3
	I0801 17:38:00.337681   31047 api_server.go:130] duration metric: took 4.517452084s to wait for apiserver health ...
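
The wait above (api_server.go:240-266) polls /healthz roughly every 500ms and treats the transitional responses as "not ready yet": 403 while the RBAC bootstrap roles are missing, 500 while post-start hooks are still failing, success only on 200. A minimal sketch of that loop; InsecureSkipVerify stands in here for the real client TLS configuration minikube uses:

// Poll an apiserver /healthz endpoint until it returns 200 or the
// timeout expires, mirroring the check sequence logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			// 403 and 500 are expected while bootstrap completes.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:51289/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
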
	I0801 17:38:00.337687   31047 cni.go:95] Creating CNI manager for ""
	I0801 17:38:00.337692   31047 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:38:00.337703   31047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:38:00.344812   31047 system_pods.go:59] 8 kube-system pods found
	I0801 17:38:00.344828   31047 system_pods.go:61] "coredns-6d4b75cb6d-qb7sz" [77b59710-ca1b-4065-bf3b-ee7a85c78408] Running
	I0801 17:38:00.344836   31047 system_pods.go:61] "etcd-no-preload-20220801173626-13911" [e7d936e6-08ca-4c1d-99af-689effe61062] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0801 17:38:00.344843   31047 system_pods.go:61] "kube-apiserver-no-preload-20220801173626-13911" [4e6c4e55-cc13-472a-afbe-59a6a2ec20ad] Running
	I0801 17:38:00.344847   31047 system_pods.go:61] "kube-controller-manager-no-preload-20220801173626-13911" [28fbab73-82d5-4181-8471-d287ef555c41] Running
	I0801 17:38:00.344851   31047 system_pods.go:61] "kube-proxy-2spmx" [34f279f3-ae86-4a39-92bc-978b6b6c44fd] Running
	I0801 17:38:00.344855   31047 system_pods.go:61] "kube-scheduler-no-preload-20220801173626-13911" [8b3b67a0-1d6a-454c-85e1-c104c7bff40e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0801 17:38:00.344862   31047 system_pods.go:61] "metrics-server-5c6f97fb75-wrh2c" [9d42bee2-4bb9-4237-8444-831f4c65f0b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:38:00.344866   31047 system_pods.go:61] "storage-provisioner" [dd76b63a-5481-4315-bfbb-d56bd50aef64] Running
	I0801 17:38:00.344870   31047 system_pods.go:74] duration metric: took 7.163598ms to wait for pod list to return data ...
	I0801 17:38:00.344876   31047 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:38:00.347456   31047 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:38:00.347468   31047 node_conditions.go:123] node cpu capacity is 6
	I0801 17:38:00.347477   31047 node_conditions.go:105] duration metric: took 2.59659ms to run NodePressure ...
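
The two checks above (system_pods.go, node_conditions.go) list the kube-system pods and read node capacity through the Kubernetes API. A client-go sketch of those reads; the kubeconfig path is an assumption, since minikube wires its client internally:

// List kube-system pods and print node capacity, as in the checks above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
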
	I0801 17:38:00.347486   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:38:00.471283   31047 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0801 17:38:00.475832   31047 kubeadm.go:777] kubelet initialised
	I0801 17:38:00.475844   31047 kubeadm.go:778] duration metric: took 4.548844ms waiting for restarted kubelet to initialise ...
	I0801 17:38:00.475851   31047 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:38:00.481039   31047 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-qb7sz" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:00.486739   31047 pod_ready.go:92] pod "coredns-6d4b75cb6d-qb7sz" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:00.486750   31047 pod_ready.go:81] duration metric: took 5.697955ms waiting for pod "coredns-6d4b75cb6d-qb7sz" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:00.486762   31047 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:02.500418   31047 pod_ready.go:102] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:05.000962   31047 pod_ready.go:102] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:07.001386   31047 pod_ready.go:102] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:08.499575   31047 pod_ready.go:92] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:08.499589   31047 pod_ready.go:81] duration metric: took 8.012693599s waiting for pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:08.499595   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:10.513107   31047 pod_ready.go:102] pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:12.510113   31047 pod_ready.go:92] pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:12.510126   31047 pod_ready.go:81] duration metric: took 4.010464323s waiting for pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:12.510132   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.022615   31047 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:14.022629   31047 pod_ready.go:81] duration metric: took 1.512455198s waiting for pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.022635   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2spmx" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.026883   31047 pod_ready.go:92] pod "kube-proxy-2spmx" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:14.026894   31047 pod_ready.go:81] duration metric: took 4.246546ms waiting for pod "kube-proxy-2spmx" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.026900   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.030969   31047 pod_ready.go:92] pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:14.030977   31047 pod_ready.go:81] duration metric: took 4.07323ms waiting for pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
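
Each "waiting up to 4m0s for pod ... to be Ready" block above (pod_ready.go) is a poll on the pod's PodReady condition, as in the following sketch. The kubeconfig path and the roughly two-second poll interval are inferred from the log, not confirmed internals:

// Wait for a pod's PodReady condition to become True, mirroring the
// pod_ready.go waits logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s between checks
	}
	return fmt.Errorf("pod %s/%s never became Ready within %s", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitForPodReady(c, "kube-system", "metrics-server-5c6f97fb75-wrh2c", 4*time.Minute))
}
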
	I0801 17:38:14.030983   31047 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:16.041234   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:18.041647   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:20.542837   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:23.041487   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:25.043560   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:27.540915   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:29.543086   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:32.042479   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:34.544640   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:37.044506   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:39.541915   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:41.544271   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:44.041420   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:46.042431   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:48.044498   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:50.543837   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:53.041176   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:55.044380   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:57.541598   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:59.545044   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:02.042789   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:04.044739   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:06.541143   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:08.542691   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	W0801 17:39:10.045604   30307 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0801 17:39:10.045633   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0801 17:39:10.468055   30307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:39:10.477578   30307 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:39:10.477629   30307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:39:10.485644   30307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:39:10.485666   30307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:39:11.219133   30307 out.go:204]   - Generating certificates and keys ...
	I0801 17:39:11.823639   30307 out.go:204]   - Booting up control plane ...
	I0801 17:39:11.042720   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:13.042943   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:15.043258   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:17.043865   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:19.542284   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:21.544438   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:24.040750   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:26.042378   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:28.544524   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:31.041513   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:33.042620   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:35.043058   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:37.543424   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:40.043048   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:42.044820   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:44.541659   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:46.544518   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:49.044133   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:51.543084   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:54.045047   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:56.542567   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:58.545088   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:01.043406   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:03.044252   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:05.542151   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:07.543499   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:09.544345   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:12.045195   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:14.542899   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:16.544629   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:18.545930   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:21.044674   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:23.045379   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:25.545385   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:27.545491   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:30.042095   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:32.043488   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:34.548393   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:37.043300   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:39.546662   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:42.044663   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:44.544152   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:46.545544   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:49.042550   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:51.044633   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:53.542274   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:55.543494   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:58.043271   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:00.043457   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:02.043870   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:04.044318   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:06.739199   30307 kubeadm.go:397] StartCluster complete in 7m59.637942115s
	I0801 17:41:06.739275   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:41:06.768243   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.768256   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:41:06.768314   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:41:06.798174   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.798186   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:41:06.798242   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:41:06.827196   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.827207   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:41:06.827266   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:41:06.857151   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.857164   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:41:06.857221   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:41:06.886482   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.886494   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:41:06.886551   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:41:06.915571   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.915583   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:41:06.915642   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:41:06.946187   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.946200   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:41:06.946261   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:41:06.976305   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.976317   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:41:06.976324   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:41:06.976330   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:41:09.033371   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056995262s)
	I0801 17:41:09.033517   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:41:09.033529   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:41:09.074454   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:41:09.074467   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:41:09.086365   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:41:09.086383   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:41:09.139109   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:41:09.139121   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:41:09.139129   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
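
The log-gathering pass above (logs.go:274) probes for each control-plane component by running 'docker ps -a' with a k8s_<component> name filter and collecting container IDs; zero IDs produces the "No container was found matching ..." warnings. A local sketch of that probe (minikube runs the same command over SSH):

// Probe for Kubernetes containers by name filter, as in logs.go above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"storage-provisioner"} {
		ids := containerIDs(c)
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
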
	W0801 17:41:09.152961   30307 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0801 17:41:09.152979   30307 out.go:239] * 
	W0801 17:41:09.153075   30307 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 17:41:09.153105   30307 out.go:239] * 
	W0801 17:41:09.153626   30307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 17:41:09.216113   30307 out.go:177] 
	W0801 17:41:09.258477   30307 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 17:41:09.258605   30307 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0801 17:41:09.258689   30307 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0801 17:41:09.300266   30307 out.go:177] 
	I0801 17:41:06.045067   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:08.046647   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:33:03 UTC, end at Tue 2022-08-02 00:41:10 UTC. --
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.047508449Z" level=info msg="Processing signal 'terminated'"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.048554008Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.049066697Z" level=info msg="Daemon shutdown complete"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.049140956Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: docker.service: Succeeded.
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: Stopped Docker Application Container Engine.
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: Starting Docker Application Container Engine...
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.103993889Z" level=info msg="Starting up"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107258175Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107331231Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107364819Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107377776Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108456849Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108470092Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108484226Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108493814Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.111425754Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.115111191Z" level=info msg="Loading containers: start."
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.188779913Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.218225237Z" level=info msg="Loading containers: done."
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.226251934Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.226311143Z" level=info msg="Daemon has completed initialization"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: Started Docker Application Container Engine.
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.252520264Z" level=info msg="API listen on [::]:2376"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.256100929Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-08-02T00:41:12Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  00:41:13 up  1:06,  0 users,  load average: 0.18, 0.64, 0.93
	Linux old-k8s-version-20220801172716-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:33:03 UTC, end at Tue 2022-08-02 00:41:13 UTC. --
	Aug 02 00:41:11 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14468]: I0802 00:41:12.241400   14468 server.go:410] Version: v1.16.0
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14468]: I0802 00:41:12.241605   14468 plugins.go:100] No cloud provider specified.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14468]: I0802 00:41:12.241619   14468 server.go:773] Client rotation is on, will bootstrap in background
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14468]: I0802 00:41:12.243776   14468 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14468]: W0802 00:41:12.244452   14468 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14468]: W0802 00:41:12.244516   14468 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14468]: F0802 00:41:12.244551   14468 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14481]: I0802 00:41:12.987611   14481 server.go:410] Version: v1.16.0
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14481]: I0802 00:41:12.987757   14481 plugins.go:100] No cloud provider specified.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14481]: I0802 00:41:12.987766   14481 server.go:773] Client rotation is on, will bootstrap in background
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14481]: I0802 00:41:12.989191   14481 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14481]: W0802 00:41:12.989829   14481 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14481]: W0802 00:41:12.989892   14481 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 kubelet[14481]: F0802 00:41:12.989915   14481 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 02 00:41:12 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0801 17:41:13.157449   31318 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
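
Note on the failure above: kubelet is crash-looping with "failed to run Kubelet: mountpoint for cpu not found", and the start log itself suggests the cgroup-driver workaround. A minimal manual triage along those lines, reusing this run's profile name (a sketch only; the journalctl pipe and the reduced flag set are illustrative, and the original start flags may also be required):

	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220801172716-13911 "sudo journalctl -xeu kubelet | tail -n 30"
	out/minikube-darwin-amd64 start -p old-k8s-version-20220801172716-13911 --extra-config=kubelet.cgroup-driver=systemd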
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 2 (449.038719ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220801172716-13911" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (492.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (43.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220801172918-13911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911
E0801 17:35:44.729021   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:35:48.360843   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:35:54.811269   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911: exit status 2 (16.109763042s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911: exit status 2 (16.104932441s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220801172918-13911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-darwin-amd64 unpause -p embed-certs-20220801172918-13911 --alsologtostderr -v=1: (1.053509309s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911
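
For reference, the assertion this test makes can be replayed by hand with the same binary; per start_stop_delete_test.go:311 the apiserver status should read "Paused" after pause, whereas this run returned "Stopped":

	out/minikube-darwin-amd64 pause -p embed-certs-20220801172918-13911 --alsologtostderr -v=1
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220801172918-13911    # want "Paused"
	out/minikube-darwin-amd64 unpause -p embed-certs-20220801172918-13911 --alsologtostderr -v=1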
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220801172918-13911
helpers_test.go:235: (dbg) docker inspect embed-certs-20220801172918-13911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e",
	        "Created": "2022-08-02T00:29:24.764733922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 239100,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:30:27.941404852Z",
	            "FinishedAt": "2022-08-02T00:30:25.954194075Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e/hostname",
	        "HostsPath": "/var/lib/docker/containers/36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e/hosts",
	        "LogPath": "/var/lib/docker/containers/36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e/36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e-json.log",
	        "Name": "/embed-certs-20220801172918-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220801172918-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220801172918-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6409b2cdb50e70d48bc0e2f9fd19921d57344ede11b4f296c3e51d67d8c063ff-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6409b2cdb50e70d48bc0e2f9fd19921d57344ede11b4f296c3e51d67d8c063ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6409b2cdb50e70d48bc0e2f9fd19921d57344ede11b4f296c3e51d67d8c063ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6409b2cdb50e70d48bc0e2f9fd19921d57344ede11b4f296c3e51d67d8c063ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220801172918-13911",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220801172918-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220801172918-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220801172918-13911",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220801172918-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3c26a0194c710fd65ea454df30a364a9abd7a135d55fb40b218b72a4e8bce5b6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50644"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50645"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50646"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50647"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50648"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3c26a0194c71",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220801172918-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "36a3296308ce",
	                        "embed-certs-20220801172918-13911"
	                    ],
	                    "NetworkID": "cc902d3931f689ec536b0026cbc9a9824733708535d90fc4f7a0dc8b971e8a42",
	                    "EndpointID": "202710a1eb3f6cd7fb64d18b5e50e9bc0cb248134bab8646aafcc93ada5be5e8",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
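
For this failure only two fields of the inspect dump matter, the container's run and pause state; a narrower query with the stock docker CLI (not something the harness runs) would be:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-20220801172918-13911
	# the State block above corresponds to: running paused=false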
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220801172918-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220801172918-13911 logs -n 25: (2.659877151s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p calico-20220801171038-13911                    | calico-20220801171038-13911             | jenkins | v1.26.0 | 01 Aug 22 17:25 PDT | 01 Aug 22 17:25 PDT |
	| start   | -p bridge-20220801171037-13911                    | bridge-20220801171037-13911             | jenkins | v1.26.0 | 01 Aug 22 17:25 PDT | 01 Aug 22 17:26 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p false-20220801171038-13911                     | false-20220801171038-13911              | jenkins | v1.26.0 | 01 Aug 22 17:25 PDT | 01 Aug 22 17:25 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p false-20220801171038-13911                     | false-20220801171038-13911              | jenkins | v1.26.0 | 01 Aug 22 17:26 PDT | 01 Aug 22 17:26 PDT |
	| start   | -p                                                | enable-default-cni-20220801171037-13911 | jenkins | v1.26.0 | 01 Aug 22 17:26 PDT | 01 Aug 22 17:26 PDT |
	|         | enable-default-cni-20220801171037-13911           |                                         |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220801171037-13911 | jenkins | v1.26.0 | 01 Aug 22 17:26 PDT | 01 Aug 22 17:26 PDT |
	|         | enable-default-cni-20220801171037-13911           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| ssh     | -p bridge-20220801171037-13911                    | bridge-20220801171037-13911             | jenkins | v1.26.0 | 01 Aug 22 17:26 PDT | 01 Aug 22 17:26 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p bridge-20220801171037-13911                    | bridge-20220801171037-13911             | jenkins | v1.26.0 | 01 Aug 22 17:27 PDT | 01 Aug 22 17:27 PDT |
	| delete  | -p                                                | enable-default-cni-20220801171037-13911 | jenkins | v1.26.0 | 01 Aug 22 17:27 PDT | 01 Aug 22 17:27 PDT |
	|         | enable-default-cni-20220801171037-13911           |                                         |         |         |                     |                     |
	| start   | -p                                                | kubenet-20220801171037-13911            | jenkins | v1.26.0 | 01 Aug 22 17:27 PDT | 01 Aug 22 17:28 PDT |
	|         | kubenet-20220801171037-13911                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220801172716-13911    | jenkins | v1.26.0 | 01 Aug 22 17:27 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220801171037-13911            | jenkins | v1.26.0 | 01 Aug 22 17:28 PDT | 01 Aug 22 17:28 PDT |
	|         | kubenet-20220801171037-13911                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220801171037-13911            | jenkins | v1.26.0 | 01 Aug 22 17:29 PDT | 01 Aug 22 17:29 PDT |
	|         | kubenet-20220801171037-13911                      |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:29 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220801172716-13911    | jenkins | v1.26.0 | 01 Aug 22 17:31 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220801172716-13911    | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT | 01 Aug 22 17:33 PDT |
	|         | old-k8s-version-20220801172716-13911              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220801172716-13911    | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT | 01 Aug 22 17:33 PDT |
	|         | old-k8s-version-20220801172716-13911              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220801172716-13911    | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:33:02
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:33:02.092956   30307 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:33:02.093151   30307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:33:02.093156   30307 out.go:309] Setting ErrFile to fd 2...
	I0801 17:33:02.093160   30307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:33:02.093248   30307 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:33:02.093715   30307 out.go:303] Setting JSON to false
	I0801 17:33:02.108781   30307 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9153,"bootTime":1659391229,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:33:02.108901   30307 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:33:02.131071   30307 out.go:177] * [old-k8s-version-20220801172716-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:33:02.207125   30307 notify.go:193] Checking for updates...
	I0801 17:33:02.227733   30307 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:33:02.269750   30307 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:33:02.311846   30307 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:33:02.354020   30307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:33:02.375064   30307 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:33:02.396274   30307 config.go:180] Loaded profile config "old-k8s-version-20220801172716-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0801 17:33:02.417428   30307 out.go:177] * Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	I0801 17:33:02.438938   30307 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:33:02.509086   30307 docker.go:137] docker version: linux-20.10.17
	I0801 17:33:02.509230   30307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:33:02.642340   30307 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:33:02.585183315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:33:02.684700   30307 out.go:177] * Using the docker driver based on existing profile
	I0801 17:33:02.705708   30307 start.go:284] selected driver: docker
	I0801 17:33:02.705726   30307 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:33:02.705810   30307 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:33:02.707990   30307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:33:02.841272   30307 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:33:02.783411359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:33:02.841425   30307 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 17:33:02.841442   30307 cni.go:95] Creating CNI manager for ""
	I0801 17:33:02.841454   30307 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:33:02.841463   30307 start_flags.go:310] config:
	{Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSD
omain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
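As context for the profile config dumped above, a minimal sketch of inspecting the saved config.json on the host (the profile name and path layout are taken from this run; jq is an assumed dependency, not something minikube uses here):

  # Hypothetical inspection of the saved profile config; requires jq.
  PROFILE=old-k8s-version-20220801172716-13911
  CFG="$HOME/.minikube/profiles/$PROFILE/config.json"   # path layout assumed from the log above
  jq '{Name, Driver, KubernetesVersion: .KubernetesConfig.KubernetesVersion}' "$CFG"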
	I0801 17:33:02.863560   30307 out.go:177] * Starting control plane node old-k8s-version-20220801172716-13911 in cluster old-k8s-version-20220801172716-13911
	I0801 17:33:02.901007   30307 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:33:02.922018   30307 out.go:177] * Pulling base image ...
	I0801 17:33:02.994914   30307 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:33:02.994956   30307 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:33:02.995023   30307 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0801 17:33:02.995060   30307 cache.go:57] Caching tarball of preloaded images
	I0801 17:33:02.995280   30307 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:33:02.995300   30307 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on docker
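The preload verification above amounts to checking that the cached tarball file exists; a minimal sketch of the same check (the cache path is copied from the log, the messages are illustrative):

  TARBALL="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
  if [ -f "$TARBALL" ]; then
    echo "preload found, skipping download"
  else
    echo "preload missing, would download" >&2
  fi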
	I0801 17:33:02.996429   30307 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/config.json ...
	I0801 17:33:03.060663   30307 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:33:03.060678   30307 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:33:03.060689   30307 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:33:03.060733   30307 start.go:371] acquiring machines lock for old-k8s-version-20220801172716-13911: {Name:mkbe9b0aeba6b12111b317502f6798dbe4170df1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:33:03.060814   30307 start.go:375] acquired machines lock for "old-k8s-version-20220801172716-13911" in 58.105µs
	I0801 17:33:03.060833   30307 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:33:03.060843   30307 fix.go:55] fixHost starting: 
	I0801 17:33:03.061068   30307 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220801172716-13911 --format={{.State.Status}}
	I0801 17:33:03.128234   30307 fix.go:103] recreateIfNeeded on old-k8s-version-20220801172716-13911: state=Stopped err=<nil>
	W0801 17:33:03.128265   30307 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:33:03.171939   30307 out.go:177] * Restarting existing docker container for "old-k8s-version-20220801172716-13911" ...
	I0801 17:33:03.192980   30307 cli_runner.go:164] Run: docker start old-k8s-version-20220801172716-13911
	I0801 17:33:03.538000   30307 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220801172716-13911 --format={{.State.Status}}
	I0801 17:33:03.611055   30307 kic.go:415] container "old-k8s-version-20220801172716-13911" state is running.
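The restart above is a plain docker start followed by a state check; a minimal sketch of that sequence, with a bounded poll added for illustration (the 30-attempt limit is an assumption, not minikube's actual timeout):

  NAME=old-k8s-version-20220801172716-13911
  docker start "$NAME"
  # Poll until the container reports "running"; 30 attempts is an arbitrary bound.
  for i in $(seq 1 30); do
    state=$(docker container inspect "$NAME" --format '{{.State.Status}}')
    [ "$state" = running ] && break
    sleep 1
  done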
	I0801 17:33:03.611725   30307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:33:03.686263   30307 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/config.json ...
	I0801 17:33:03.686646   30307 machine.go:88] provisioning docker machine ...
	I0801 17:33:03.686671   30307 ubuntu.go:169] provisioning hostname "old-k8s-version-20220801172716-13911"
	I0801 17:33:03.686737   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:03.759719   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:03.759935   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:03.759949   30307 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220801172716-13911 && echo "old-k8s-version-20220801172716-13911" | sudo tee /etc/hostname
	I0801 17:33:03.881107   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220801172716-13911
	
	I0801 17:33:03.881202   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:03.953049   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:03.953193   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:03.953209   30307 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220801172716-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220801172716-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220801172716-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:33:04.068209   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:33:04.068228   30307 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:33:04.068250   30307 ubuntu.go:177] setting up certificates
	I0801 17:33:04.068257   30307 provision.go:83] configureAuth start
	I0801 17:33:04.068317   30307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:33:04.140299   30307 provision.go:138] copyHostCerts
	I0801 17:33:04.140379   30307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:33:04.140388   30307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:33:04.140472   30307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:33:04.140693   30307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:33:04.140702   30307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:33:04.140790   30307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:33:04.140960   30307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:33:04.140968   30307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:33:04.141026   30307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:33:04.141173   30307 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220801172716-13911 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220801172716-13911]
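To confirm which SANs ended up in the regenerated server certificate listed above, a minimal openssl check (the machines/ path is assumed from the log; this is not a step minikube performs itself):

  openssl x509 -noout -text \
    -in "$HOME/.minikube/machines/server.pem" \
    | grep -A1 'Subject Alternative Name'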
	I0801 17:33:04.220622   30307 provision.go:172] copyRemoteCerts
	I0801 17:33:04.220690   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:33:04.220732   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.292178   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:04.375104   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:33:04.392099   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0801 17:33:04.410165   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 17:33:04.426562   30307 provision.go:86] duration metric: configureAuth took 358.288794ms
	I0801 17:33:04.426574   30307 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:33:04.426746   30307 config.go:180] Loaded profile config "old-k8s-version-20220801172716-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0801 17:33:04.426801   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.497954   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:04.498129   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:04.498141   30307 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:33:04.611392   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:33:04.611410   30307 ubuntu.go:71] root file system type: overlay
	I0801 17:33:04.611545   30307 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:33:04.611619   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.683157   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:04.683304   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:04.683371   30307 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:33:04.808590   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:33:04.808679   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.879830   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:04.879994   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:04.880012   30307 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:33:04.997035   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:33:04.997049   30307 machine.go:91] provisioned docker machine in 1.310380032s
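The one-liner above implements an idempotent unit install: replace the unit and restart the service only when the freshly rendered file differs. A minimal generic sketch of the same pattern (paths copied from the log):

  UNIT=/lib/systemd/system/docker.service
  # Swap in the new unit and restart only when the rendering actually changed.
  if ! sudo diff -u "$UNIT" "$UNIT.new"; then
    sudo mv "$UNIT.new" "$UNIT"
    sudo systemctl daemon-reload
    sudo systemctl -f enable docker
    sudo systemctl -f restart docker
  fi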
	I0801 17:33:04.997056   30307 start.go:307] post-start starting for "old-k8s-version-20220801172716-13911" (driver="docker")
	I0801 17:33:04.997074   30307 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:33:04.997144   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:33:04.997190   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.069168   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.153399   30307 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:33:05.157021   30307 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:33:05.157038   30307 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:33:05.157045   30307 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:33:05.157050   30307 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:33:05.157058   30307 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:33:05.157159   30307 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:33:05.157296   30307 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:33:05.157452   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:33:05.164984   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:33:05.182269   30307 start.go:310] post-start completed in 185.186568ms
	I0801 17:33:05.182349   30307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:33:05.182412   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.253249   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.336526   30307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:33:05.340949   30307 fix.go:57] fixHost completed within 2.280081452s
	I0801 17:33:05.340961   30307 start.go:82] releasing machines lock for "old-k8s-version-20220801172716-13911", held for 2.280115227s
	I0801 17:33:05.341031   30307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:33:05.411603   30307 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:33:05.411607   30307 ssh_runner.go:195] Run: systemctl --version
	I0801 17:33:05.411671   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.411689   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.488484   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.490663   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.760297   30307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:33:05.770249   30307 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:33:05.770315   30307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:33:05.781723   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:33:05.794766   30307 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:33:05.869802   30307 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:33:05.934941   30307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:33:06.019332   30307 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:33:06.228189   30307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:33:06.267803   30307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:33:02.019050   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:04.516377   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:06.519806   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:06.346695   30307 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0801 17:33:06.346845   30307 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220801172716-13911 dig +short host.docker.internal
	I0801 17:33:06.475760   30307 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:33:06.475854   30307 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:33:06.480076   30307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
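The two commands above first test for the mapping, then rewrite /etc/hosts without any stale entry and append the fresh one; a minimal sketch of that idempotent pattern (IP and hostname copied from this run):

  IP=192.168.65.2; HOST=host.minikube.internal
  if ! grep -q "$IP"$'\t'"$HOST"'$' /etc/hosts; then
    # Drop any old mapping for the host, then append the current one.
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
  fi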
	I0801 17:33:06.489496   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:06.561364   30307 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:33:06.561454   30307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:33:06.592913   30307 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0801 17:33:06.592929   30307 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:33:06.593009   30307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:33:06.623551   30307 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0801 17:33:06.623571   30307 cache_images.go:84] Images are preloaded, skipping loading
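The identical listings above are how extraction gets skipped: the images already in the daemon are compared against the expected preload set. A minimal sketch of that comparison (the expected list is abbreviated to two entries for illustration):

  docker images --format '{{.Repository}}:{{.Tag}}' | sort > /tmp/have
  printf '%s\n' \
    k8s.gcr.io/kube-apiserver:v1.16.0 \
    k8s.gcr.io/etcd:3.3.15-0 | sort > /tmp/want
  # Any output lists expected images that are still missing from the daemon.
  comm -13 /tmp/have /tmp/want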
	I0801 17:33:06.623646   30307 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:33:06.699039   30307 cni.go:95] Creating CNI manager for ""
	I0801 17:33:06.699060   30307 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:33:06.699074   30307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:33:06.699090   30307 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220801172716-13911 NodeName:old-k8s-version-20220801172716-13911 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd Clien
tCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:33:06.699238   30307 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220801172716-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220801172716-13911
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0801 17:33:06.699312   30307 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220801172716-13911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
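One way to sanity-check a rendered kubeadm.yaml like the one above before the phased restart is a dry run against the pinned binaries (this is not a step minikube performs here; the paths are copied from the log):

  sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
    kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml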
	I0801 17:33:06.699380   30307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0801 17:33:06.706617   30307 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:33:06.706669   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:33:06.713640   30307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0801 17:33:06.727903   30307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:33:06.740699   30307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0801 17:33:06.754028   30307 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:33:06.757691   30307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:33:06.767564   30307 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911 for IP: 192.168.76.2
	I0801 17:33:06.767666   30307 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:33:06.767715   30307 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:33:06.767802   30307 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/client.key
	I0801 17:33:06.767861   30307 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key.31bdca25
	I0801 17:33:06.767909   30307 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.key
	I0801 17:33:06.768129   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:33:06.768165   30307 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:33:06.768179   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:33:06.768215   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:33:06.768244   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:33:06.768273   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:33:06.768343   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:33:06.770066   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:33:06.786809   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 17:33:06.803930   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:33:06.820293   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0801 17:33:06.836640   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:33:06.853270   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:33:06.869959   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:33:06.886388   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:33:06.903049   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:33:06.920046   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:33:06.936329   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:33:06.953108   30307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:33:06.965417   30307 ssh_runner.go:195] Run: openssl version
	I0801 17:33:06.970864   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:33:06.979779   30307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:33:06.983543   30307 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:33:06.983586   30307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:33:06.988888   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 17:33:06.995997   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:33:07.003729   30307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:33:07.007447   30307 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:33:07.007493   30307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:33:07.012803   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:33:07.020845   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:33:07.028574   30307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:33:07.032339   30307 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:33:07.032378   30307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:33:07.037622   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
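The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) come straight from each certificate's subject hash; a minimal sketch of how one such /etc/ssl/certs entry is derived (cert path copied from the log):

  CERT=/usr/share/ca-certificates/minikubeCA.pem
  h=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
  sudo ln -fs "$CERT" "/etc/ssl/certs/$h.0"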
	I0801 17:33:07.044888   30307 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:fa
lse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:33:07.044982   30307 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:33:07.073047   30307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:33:07.080535   30307 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:33:07.080556   30307 kubeadm.go:626] restartCluster start
	I0801 17:33:07.080608   30307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:33:07.087807   30307 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.087873   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:08.520280   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:11.017860   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:07.161019   30307 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220801172716-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:33:07.161188   30307 kubeconfig.go:127] "old-k8s-version-20220801172716-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:33:07.161555   30307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:33:07.162658   30307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:33:07.170122   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.170170   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.178204   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.378464   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.378560   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.388766   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.579693   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.579819   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.590131   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.780063   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.780238   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.791267   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.978733   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.978885   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.988977   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.178638   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.178717   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.187944   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.378810   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.378930   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.389502   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.578776   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.578955   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.589682   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.778805   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.778941   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.790788   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.980073   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.980189   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.990770   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.178462   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.178599   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.188930   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.378914   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.379012   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.389506   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.580573   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.580704   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.591607   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.780347   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.780485   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.790994   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.978646   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.978775   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.989169   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:10.178855   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:10.178968   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:10.187897   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:10.187907   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:10.187955   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:10.195605   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:10.195617   30307 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
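The repeated checks above are a bounded poll for the apiserver process using the same pgrep pattern; a minimal sketch of that loop (the 15-attempt bound here is illustrative, the real timeout is longer):

  for i in $(seq 1 15); do
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
    sleep 0.2
  done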
	I0801 17:33:10.195625   30307 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:33:10.195675   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:33:10.224715   30307 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:33:10.234985   30307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:33:10.242805   30307 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Aug  2 00:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5775 Aug  2 00:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Aug  2 00:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Aug  2 00:29 /etc/kubernetes/scheduler.conf
	
	I0801 17:33:10.242857   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 17:33:10.250643   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 17:33:10.258189   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 17:33:10.266321   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 17:33:10.273876   30307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:33:10.281390   30307 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:33:10.281402   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:10.329953   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:11.032947   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:11.233358   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:11.290594   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:11.342083   30307 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:33:11.342142   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:11.851910   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:13.020890   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:15.518246   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:12.351846   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:12.851217   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:13.353310   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:13.851936   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:14.353088   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:14.853235   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:15.353275   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:15.852184   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:16.353240   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:16.853252   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:17.519344   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:20.020318   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:17.353304   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:17.853335   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:18.351214   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:18.851526   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:19.351430   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:19.853261   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:20.352524   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:20.851275   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:21.352561   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:21.851472   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:22.518183   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:24.520049   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:26.520871   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:22.351688   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:22.851332   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:23.351357   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:23.851974   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:24.353354   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:24.851825   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:25.353110   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:25.851764   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:26.351912   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:26.851768   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:29.020027   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:31.518304   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:27.351519   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:27.851289   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:28.351671   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:28.851467   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:29.351418   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:29.851312   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:30.351309   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:30.851712   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:31.353119   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:31.852333   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:33.519382   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:35.520795   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:32.351358   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:32.851965   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:33.351587   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:33.852401   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:34.351610   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:34.851477   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:35.351739   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:35.852236   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:36.351836   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:36.852166   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:38.019767   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:40.519078   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:37.351461   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:37.852701   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:38.351889   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:38.853136   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:39.353555   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:39.851668   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:40.351742   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:40.852690   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:41.351542   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:41.851651   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:42.521319   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:45.017879   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:42.351647   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:42.852217   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:43.352460   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:43.851462   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:44.351520   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:44.851542   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:45.352287   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:45.851529   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:46.351462   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:46.853011   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:47.021800   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:49.022351   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:51.518460   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:47.353014   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:47.852957   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:48.351794   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:48.851608   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:49.353132   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:49.852861   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:50.351559   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:50.851826   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:51.351605   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:51.852394   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:53.519760   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:56.020459   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:52.351865   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:52.852613   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:53.352321   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:53.851626   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:54.351598   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:54.851666   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:55.351623   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:55.851667   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:56.351631   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:56.851992   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:58.021113   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:00.519291   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:57.351708   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:57.851772   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:58.351628   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:58.851633   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:59.352270   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:59.851588   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:00.351911   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:00.852107   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:01.352190   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:01.851781   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:02.519479   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:05.019235   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:02.352022   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:02.853040   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:03.352607   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:03.852400   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:04.351810   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:04.851747   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:05.351908   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:05.851982   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:06.353234   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:06.851753   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:07.519283   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:09.520506   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:07.351805   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:07.851765   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:08.353881   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:08.852724   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:09.351746   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:09.853807   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:10.353834   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:10.853159   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:11.352358   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:11.383418   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.383432   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:11.383494   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:11.413072   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.413084   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:11.413142   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:11.442218   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.442230   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:11.442288   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:11.470969   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.470982   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:11.471044   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:11.500295   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.500308   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:11.500367   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:11.533285   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.533298   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:11.533358   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:11.563355   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.563367   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:11.563427   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:11.592445   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.592456   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:11.592479   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:11.592488   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:11.632510   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:11.632522   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:11.644313   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:11.644327   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:11.695794   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:11.695809   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:11.695815   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:11.709396   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:11.709407   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:12.018955   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:14.019462   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:16.519291   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:13.763461   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054019747s)
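With the roughly 60s wait (17:33:11 to 17:34:11) exhausted, minikube switches to the diagnostic sweep that repeats for the rest of this test: list each expected control-plane container by name filter, then collect the kubelet and Docker journals, dmesg, kubectl describe nodes, and a container status listing with a crictl-or-docker fallback. A sketch of one sweep as a standalone script, built only from commands visible in the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kubernetes-dashboard storage-provisioner kube-controller-manager; do
        docker ps -a --filter=name=k8s_$c --format={{.ID}}
    done
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u docker -n 400
    # prefer crictl when installed, otherwise fall back to docker
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Every pass below ends the same way: zero matching containers and a connection refused on localhost:8443, i.e. the control plane never came back after the phase replay.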
	I0801 17:34:16.264200   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:16.353932   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:16.385118   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.385130   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:16.385190   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:16.414517   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.414529   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:16.414588   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:16.443356   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.443369   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:16.443435   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:16.477272   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.477285   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:16.477348   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:16.510936   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.510949   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:16.511011   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:16.547639   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.547652   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:16.547713   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:16.578107   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.578119   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:16.578177   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:16.607309   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.607323   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:16.607331   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:16.607339   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:16.645996   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:16.646009   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:16.657128   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:16.657141   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:16.709161   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:16.709176   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:16.709182   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:16.722936   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:16.722954   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:19.021570   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:21.518428   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:18.775009   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052014549s)
	I0801 17:34:21.277564   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:21.354038   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:21.385924   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.385936   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:21.385997   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:21.414350   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.414362   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:21.414418   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:21.444094   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.444107   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:21.444162   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:21.472715   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.472727   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:21.472784   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:21.501199   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.501211   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:21.501288   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:21.534002   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.534016   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:21.534092   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:21.564027   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.564039   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:21.564098   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:21.593121   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.593134   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:21.593143   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:21.593150   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:21.633306   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:21.633320   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:21.645837   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:21.645850   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:21.700543   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:21.700560   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:21.700567   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:21.714946   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:21.714960   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:23.519556   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:26.018392   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:23.771704   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056708133s)
	I0801 17:34:26.272261   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:26.353456   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:26.386051   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.386063   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:26.386119   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:26.415224   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.415236   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:26.415298   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:26.445222   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.445235   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:26.445292   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:26.475024   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.475037   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:26.475097   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:26.505006   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.505019   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:26.505077   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:26.542252   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.542265   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:26.542323   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:26.572302   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.572315   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:26.572374   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:26.601432   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.601445   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:26.601452   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:26.601459   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:26.615447   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:26.615459   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:28.520501   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:31.021258   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:28.668228   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052734957s)
	I0801 17:34:28.668338   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:28.668347   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:28.707285   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:28.707298   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:28.718726   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:28.718739   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:28.769688   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:31.270139   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:31.352538   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:31.382379   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.382397   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:31.382466   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:31.414167   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.414180   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:31.414250   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:31.447114   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.447129   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:31.447197   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:31.478169   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.478183   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:31.478244   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:31.508755   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.508767   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:31.508826   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:31.541935   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.541949   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:31.542012   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:31.573200   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.573213   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:31.573271   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:31.601641   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.601654   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:31.601661   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:31.601670   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:31.615421   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:31.615434   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:33.518886   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:35.523280   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:33.667553   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052084288s)
	I0801 17:34:33.667661   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:33.667671   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:33.708058   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:33.708075   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:33.721159   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:33.721175   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:33.773936   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:36.278098   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:36.358158   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:36.389134   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.389146   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:36.389206   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:36.418282   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.418294   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:36.418350   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:36.448321   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.448333   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:36.448391   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:36.477122   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.477138   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:36.477204   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:36.506036   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.506048   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:36.506118   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:36.550984   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.550998   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:36.551060   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:36.579712   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.579725   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:36.579788   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:36.608681   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.608692   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:36.608699   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:36.608706   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:36.648271   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:36.648288   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:36.661072   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:36.661086   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:36.717917   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:36.717928   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:36.717936   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:36.732109   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:36.732124   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:38.029642   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:40.535248   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:38.791203   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053533989s)
	I0801 17:34:41.297687   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:41.369319   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:41.400112   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.400125   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:41.400185   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:41.429000   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.429013   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:41.429077   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:41.457782   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.457794   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:41.457850   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:41.489550   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.489562   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:41.489622   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:41.518587   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.518600   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:41.518658   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:41.549089   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.549101   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:41.549167   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:41.578870   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.578885   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:41.578945   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:41.608653   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.608664   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:41.608671   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:41.608677   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:41.620204   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:41.620216   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:41.673763   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:41.673777   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:41.673784   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:41.688084   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:41.688096   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:42.540376   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:45.042994   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:43.745846   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053708064s)
	I0801 17:34:43.745957   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:43.745964   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:46.290648   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:46.378519   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:46.409191   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.409203   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:46.409260   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:46.438190   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.438201   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:46.438263   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:46.470731   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.470743   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:46.470802   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:46.502588   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.502599   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:46.502655   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:46.531976   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.531988   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:46.532047   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:46.566132   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.566145   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:46.566203   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:46.600014   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.600027   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:46.600083   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:46.629125   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.629137   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:46.629144   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:46.629152   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:46.670158   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:46.670172   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:46.681911   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:46.681922   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:46.735993   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:46.736003   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:46.736010   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:46.750833   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:46.750849   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:47.048072   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:48.041271   30018 pod_ready.go:81] duration metric: took 4m0.004443286s waiting for pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace to be "Ready" ...
	E0801 17:34:48.041294   30018 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0801 17:34:48.041313   30018 pod_ready.go:38] duration metric: took 4m6.545019337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:34:48.041349   30018 kubeadm.go:630] restartCluster took 4m16.250570913s
	W0801 17:34:48.041470   30018 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0801 17:34:48.041499   30018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0801 17:34:50.371547   30018 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.327026044s)
	I0801 17:34:50.371607   30018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:34:50.381203   30018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:34:50.388735   30018 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:34:50.388781   30018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:34:50.395987   30018 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:34:50.396018   30018 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:34:50.668978   30018 out.go:204]   - Generating certificates and keys ...
	I0801 17:34:51.344642   30018 out.go:204]   - Booting up control plane ...
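The 30018 profile takes the other exit: after 4m0s the metrics-server pod never reached Ready, restartCluster gives up, and minikube falls back to a destructive reset followed by a clean kubeadm init. Condensed from the commands in the log (v1.24.3 binaries, cri-dockerd socket; the full --ignore-preflight-errors list is in the Start line above):

    BIN=/var/lib/minikube/binaries/v1.24.3
    sudo env PATH="$BIN:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    # reset already deleted /etc/kubernetes/*.conf, so the stale-config check fails and init starts fresh
    sudo env PATH="$BIN:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Port-10250,Swap,Mem,SystemVerification   # abbreviated; see the Start line above

Unlike the v1.16.0 profile (pid 30307), this init immediately reaches certificate generation and control-plane boot.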
	I0801 17:34:48.809538   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055753562s)
	I0801 17:34:51.313270   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:51.384934   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:51.414232   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.414250   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:51.414304   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:51.441881   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.441894   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:51.441954   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:51.470802   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.470813   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:51.470866   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:51.499238   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.499252   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:51.499316   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:51.527042   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.527055   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:51.527112   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:51.556456   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.556473   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:51.556541   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:51.585716   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.585728   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:51.585797   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:51.615551   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.615565   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:51.615572   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:51.615580   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:53.671946   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054212993s)
	I0801 17:34:53.672054   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:53.672061   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:53.714018   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:53.714031   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:53.725408   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:53.725422   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:53.778549   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:53.778560   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:53.778567   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
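
Note: each probe cycle by PID 30307 is identical: pgrep for a kube-apiserver process, then one docker ps query per control-plane component; every query returns zero containers, so minikube falls back to collecting kubelet, dmesg, describe-nodes, Docker, and container-status logs. The recurring "connection to the server localhost:8443 was refused" from kubectl confirms that no apiserver is listening on the node. A minimal bash sketch of the scan, with the component names and filter format taken verbatim from the log (run inside the minikube node):

    #!/bin/bash
    # Scan for each control-plane container the way the log does.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kubernetes-dashboard storage-provisioner kube-controller-manager; do
      ids=$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')
      if [ -z "$ids" ]; then
        echo "No container was found matching \"${c}\""
      else
        echo "${c}: ${ids}"
      fi
    done
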
	I0801 17:34:56.295298   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:56.390271   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:56.420485   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.420497   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:56.420554   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:56.449383   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.449397   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:56.449453   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:56.478432   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.478444   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:56.478500   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:56.506950   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.506962   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:56.507014   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:56.536393   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.536404   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:56.536463   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:56.565436   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.565449   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:56.565506   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:56.593950   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.593963   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:56.594019   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:56.621932   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.621945   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:56.621953   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:56.621960   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:56.663174   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:56.663190   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:56.675466   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:56.675478   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:56.736252   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:56.736265   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:56.736272   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:56.751881   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:56.751896   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:58.398929   30018 out.go:204]   - Configuring RBAC rules ...
	I0801 17:34:58.775671   30018 cni.go:95] Creating CNI manager for ""
	I0801 17:34:58.775685   30018 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:34:58.775705   30018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:34:58.775788   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93 minikube.k8s.io/name=embed-certs-20220801172918-13911 minikube.k8s.io/updated_at=2022_08_01T17_34_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:34:58.775809   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:34:58.873180   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:34:58.932123   30018 ops.go:34] apiserver oom_adj: -16
	I0801 17:34:59.469685   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:34:59.970374   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:00.470300   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:00.971237   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:01.470890   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:34:58.810799   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057341572s)
	I0801 17:35:01.313979   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:01.394957   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:01.429935   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.429948   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:01.430007   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:01.458854   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.458869   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:01.458940   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:01.489769   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.489781   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:01.489839   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:01.522081   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.522092   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:01.522152   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:01.552276   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.552288   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:01.552347   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:01.581231   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.581242   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:01.581303   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:01.610456   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.610468   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:01.610527   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:01.640825   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.640838   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:01.640845   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:01.640851   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:01.681164   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:01.681182   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:01.693005   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:01.693020   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:01.745760   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:01.745779   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:01.745785   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:01.760279   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:01.760291   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:01.973266   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:02.473561   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:02.971780   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:03.472297   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:03.974374   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:04.473555   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:04.973245   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:05.475185   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:05.974177   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:06.473667   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:03.814149   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052717763s)
	I0801 17:35:06.317273   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:06.397453   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:06.431739   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.431750   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:06.431808   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:06.460085   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.460096   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:06.460155   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:06.490788   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.490801   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:06.490865   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:06.521225   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.521238   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:06.521296   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:06.551676   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.551690   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:06.551748   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:06.581891   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.581903   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:06.581967   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:06.610415   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.610428   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:06.610487   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:06.638868   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.638881   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:06.638888   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:06.638896   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:06.677340   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:06.677355   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:06.689281   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:06.689296   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:06.741694   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:06.741718   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:06.741724   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:06.757440   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:06.757454   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:06.973851   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:07.474031   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:07.976257   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:08.475196   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:08.974829   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:09.476743   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:09.976344   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:10.475081   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:10.975234   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:11.475356   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:11.646732   30018 kubeadm.go:1045] duration metric: took 12.864781452s to wait for elevateKubeSystemPrivileges.
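
Note: the 12.8s recorded here is the `get sa default` polling visible above since 17:34:58. After creating the minikube-rbac clusterrolebinding, minikube retries `kubectl get sa default` at roughly 500ms intervals until the cluster has created the default ServiceAccount in kube-system. A minimal sketch of the same wait, using the commands exactly as logged (the retry loop is a reconstruction, not minikube's source):

    # Grant cluster-admin to kube-system's default ServiceAccount, then wait
    # for that ServiceAccount to exist; retry interval matches the log.
    sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac \
        --clusterrole=cluster-admin --serviceaccount=kube-system:default \
        --kubeconfig=/var/lib/minikube/kubeconfig
    until sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
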
	I0801 17:35:11.646753   30018 kubeadm.go:397] StartCluster complete in 4m39.875683659s
	I0801 17:35:11.646774   30018 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:35:11.646883   30018 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:35:11.647640   30018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
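
Note: the kubeconfig update is serialized through a file lock (lock.go) with a 500ms retry delay and a 1m timeout, so concurrent test profiles cannot clobber each other's kubeconfig writes. A loose stand-in for the same idea using flock(1); the lock path and copy command are illustrative, not minikube's actual mechanism:

    # Hold an exclusive lock while rewriting the kubeconfig; give up after 60s.
    flock -w 60 /tmp/kubeconfig.lock \
      -c 'cp kubeconfig.new ~/.kube/config'
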
	I0801 17:35:08.810862   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052570896s)
	I0801 17:35:11.312050   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:11.397246   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:11.438278   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.438296   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:11.438374   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:11.469285   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.469299   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:11.469369   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:11.506443   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.506454   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:11.506511   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:11.550600   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.550618   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:11.550696   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:11.587813   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.587828   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:11.587900   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:11.616041   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.616053   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:11.616109   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:11.656883   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.656898   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:11.656974   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:11.687937   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.687953   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:11.687962   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:11.687971   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:11.730338   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:11.730358   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:11.742630   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:11.742643   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:11.795410   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:11.795421   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:11.795429   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:11.809830   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:11.809843   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:12.166973   30018 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220801172918-13911" rescaled to 1
	I0801 17:35:12.167012   30018 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:35:12.188221   30018 out.go:177] * Verifying Kubernetes components...
	I0801 17:35:12.167030   30018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:35:12.167053   30018 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0801 17:35:12.167200   30018 config.go:180] Loaded profile config "embed-certs-20220801172918-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:35:12.262549   30018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:35:12.262646   30018 addons.go:65] Setting dashboard=true in profile "embed-certs-20220801172918-13911"
	I0801 17:35:12.262645   30018 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220801172918-13911"
	I0801 17:35:12.262650   30018 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220801172918-13911"
	I0801 17:35:12.262708   30018 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220801172918-13911"
	W0801 17:35:12.262728   30018 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:35:12.262751   30018 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220801172918-13911"
	I0801 17:35:12.262648   30018 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220801172918-13911"
	I0801 17:35:12.262814   30018 host.go:66] Checking if "embed-certs-20220801172918-13911" exists ...
	I0801 17:35:12.262813   30018 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220801172918-13911"
	W0801 17:35:12.262868   30018 addons.go:162] addon metrics-server should already be in state true
	I0801 17:35:12.262679   30018 addons.go:153] Setting addon dashboard=true in "embed-certs-20220801172918-13911"
	I0801 17:35:12.262923   30018 host.go:66] Checking if "embed-certs-20220801172918-13911" exists ...
	W0801 17:35:12.262921   30018 addons.go:162] addon dashboard should already be in state true
	I0801 17:35:12.263003   30018 host.go:66] Checking if "embed-certs-20220801172918-13911" exists ...
	I0801 17:35:12.263311   30018 cli_runner.go:164] Run: docker container inspect embed-certs-20220801172918-13911 --format={{.State.Status}}
	I0801 17:35:12.263389   30018 cli_runner.go:164] Run: docker container inspect embed-certs-20220801172918-13911 --format={{.State.Status}}
	I0801 17:35:12.264215   30018 cli_runner.go:164] Run: docker container inspect embed-certs-20220801172918-13911 --format={{.State.Status}}
	I0801 17:35:12.267920   30018 cli_runner.go:164] Run: docker container inspect embed-certs-20220801172918-13911 --format={{.State.Status}}
	I0801 17:35:12.348340   30018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220801172918-13911
	I0801 17:35:12.348347   30018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0801 17:35:12.430880   30018 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220801172918-13911"
	I0801 17:35:12.475399   30018 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:35:12.454161   30018 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0801 17:35:12.475414   30018 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:35:12.496555   30018 host.go:66] Checking if "embed-certs-20220801172918-13911" exists ...
	I0801 17:35:12.496620   30018 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:35:12.554590   30018 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0801 17:35:12.554611   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0801 17:35:12.517315   30018 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0801 17:35:12.529636   30018 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220801172918-13911" to be "Ready" ...
	I0801 17:35:12.554610   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 17:35:12.554708   30018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801172918-13911
	I0801 17:35:12.555094   30018 cli_runner.go:164] Run: docker container inspect embed-certs-20220801172918-13911 --format={{.State.Status}}
	I0801 17:35:12.613680   30018 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0801 17:35:12.591822   30018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801172918-13911
	I0801 17:35:12.650713   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0801 17:35:12.650742   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0801 17:35:12.650820   30018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801172918-13911
	I0801 17:35:12.654361   30018 node_ready.go:49] node "embed-certs-20220801172918-13911" has status "Ready":"True"
	I0801 17:35:12.654387   30018 node_ready.go:38] duration metric: took 62.755869ms waiting for node "embed-certs-20220801172918-13911" to be "Ready" ...
	I0801 17:35:12.654435   30018 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:35:12.689556   30018 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:35:12.689581   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:35:12.689651   30018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801172918-13911
	I0801 17:35:12.692736   30018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50644 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801172918-13911/id_rsa Username:docker}
	I0801 17:35:12.708311   30018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50644 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801172918-13911/id_rsa Username:docker}
	I0801 17:35:12.720631   30018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9cxff" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:12.753737   30018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50644 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801172918-13911/id_rsa Username:docker}
	I0801 17:35:12.783766   30018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50644 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801172918-13911/id_rsa Username:docker}
	I0801 17:35:12.944858   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0801 17:35:12.944871   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0801 17:35:12.946350   30018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:35:12.948041   30018 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0801 17:35:12.948053   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0801 17:35:13.035132   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0801 17:35:13.035149   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0801 17:35:13.037878   30018 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0801 17:35:13.037893   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0801 17:35:13.048088   30018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:35:13.128376   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0801 17:35:13.128394   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0801 17:35:13.140624   30018 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:35:13.140637   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0801 17:35:13.226657   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0801 17:35:13.226671   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0801 17:35:13.228059   30018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:35:13.241179   30018 pod_ready.go:92] pod "coredns-6d4b75cb6d-9cxff" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:13.241198   30018 pod_ready.go:81] duration metric: took 520.389417ms waiting for pod "coredns-6d4b75cb6d-9cxff" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:13.241207   30018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-shsxd" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:13.329185   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0801 17:35:13.329199   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0801 17:35:13.350206   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0801 17:35:13.350242   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0801 17:35:13.430373   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0801 17:35:13.430393   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0801 17:35:13.538425   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0801 17:35:13.538447   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0801 17:35:13.627865   30018 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.279116488s)
	I0801 17:35:13.627886   30018 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
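
Note: the 1.28s command that just completed is how the host.minikube.internal record gets into CoreDNS: dump the coredns ConfigMap, sed-insert a hosts{} stanza ahead of the Corefile's `forward . /etc/resolv.conf` line, and pipe the result back through `kubectl replace`. The same pipeline, reflowed for readability from the Run: line at 17:35:12:

    sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' \
      | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
          replace -f -
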
	I0801 17:35:13.630411   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:35:13.630425   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0801 17:35:13.647618   30018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:35:13.958840   30018 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220801172918-13911"
	I0801 17:35:14.835963   30018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.187987488s)
	I0801 17:35:14.877697   30018 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0801 17:35:14.936943   30018 addons.go:414] enableAddons completed in 2.769096482s
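
Note: all four addons were installed the same way: stage each manifest under /etc/kubernetes/addons/ via scp, then apply it with the cluster's pinned kubectl binary, e.g.:

    # Apply a staged addon manifest exactly as logged at 17:35:12.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.3/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml

Outside the test harness, the equivalent user-facing step would be `minikube addons enable <name>`.
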
	I0801 17:35:15.255000   30018 pod_ready.go:102] pod "coredns-6d4b75cb6d-shsxd" in "kube-system" namespace has status "Ready":"False"
	I0801 17:35:16.257299   30018 pod_ready.go:92] pod "coredns-6d4b75cb6d-shsxd" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.257312   30018 pod_ready.go:81] duration metric: took 3.015307872s waiting for pod "coredns-6d4b75cb6d-shsxd" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.257320   30018 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.261093   30018 pod_ready.go:92] pod "etcd-embed-certs-20220801172918-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.261101   30018 pod_ready.go:81] duration metric: took 3.774633ms waiting for pod "etcd-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.261106   30018 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.265310   30018 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220801172918-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.265319   30018 pod_ready.go:81] duration metric: took 4.206335ms waiting for pod "kube-apiserver-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.265324   30018 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.269346   30018 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220801172918-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.269354   30018 pod_ready.go:81] duration metric: took 4.01792ms waiting for pod "kube-controller-manager-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.269360   30018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x9k7x" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.273431   30018 pod_ready.go:92] pod "kube-proxy-x9k7x" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.273438   30018 pod_ready.go:81] duration metric: took 4.073268ms waiting for pod "kube-proxy-x9k7x" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.273444   30018 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.653610   30018 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220801172918-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.653620   30018 pod_ready.go:81] duration metric: took 380.082522ms waiting for pod "kube-scheduler-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.653625   30018 pod_ready.go:38] duration metric: took 3.998116209s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
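
Note: the ~4s of "extra waiting" polls every system-critical pod for the Ready condition using the six label selectors listed above. The same check can be approximated from the outside with kubectl wait (an illustrative stand-in for the internal pod_ready polling, not the code the test runs):

    # Wait for each system-critical pod class to report Ready; 6m cap as in the log.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy \
               component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=360s
    done
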
	I0801 17:35:16.653640   30018 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:35:16.653687   30018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:16.665575   30018 api_server.go:71] duration metric: took 4.497341647s to wait for apiserver process to appear ...
	I0801 17:35:16.665593   30018 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:35:16.665602   30018 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50648/healthz ...
	I0801 17:35:16.671952   30018 api_server.go:266] https://127.0.0.1:50648/healthz returned 200:
	ok
	I0801 17:35:16.673212   30018 api_server.go:140] control plane version: v1.24.3
	I0801 17:35:16.673221   30018 api_server.go:130] duration metric: took 7.622383ms to wait for apiserver health ...
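
Note: the healthz wait is a plain HTTPS GET against the apiserver's Docker-published port (50648 maps to the node's 8443 in this run); a 200 response with body "ok" ends the wait. A curl equivalent, with the port number specific to this run:

    # Probe apiserver health; -k because the apiserver cert is cluster-signed.
    curl -sk https://127.0.0.1:50648/healthz
    # expected output: ok
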
	I0801 17:35:16.673226   30018 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:35:13.874861   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064395737s)
	I0801 17:35:16.376183   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:16.399312   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:16.430262   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.430275   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:16.430337   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:16.460017   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.460034   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:16.460093   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:16.491848   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.491860   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:16.491920   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:16.521940   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.521955   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:16.522015   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:16.551494   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.551507   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:16.551567   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:16.582166   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.582182   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:16.582246   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:16.613564   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.613577   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:16.613646   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:16.642889   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.642902   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:16.642909   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:16.642916   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:16.705324   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:16.705334   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:16.705340   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:16.719372   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:16.719385   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:16.858133   30018 system_pods.go:59] 9 kube-system pods found
	I0801 17:35:16.858146   30018 system_pods.go:61] "coredns-6d4b75cb6d-9cxff" [3a5893dd-8ee8-436b-bca5-8c49d6224160] Running
	I0801 17:35:16.858150   30018 system_pods.go:61] "coredns-6d4b75cb6d-shsxd" [98813c90-a6e9-4120-9d54-057e7f340516] Running
	I0801 17:35:16.858153   30018 system_pods.go:61] "etcd-embed-certs-20220801172918-13911" [370a0346-c668-4e31-ad3b-6ae311038f95] Running
	I0801 17:35:16.858157   30018 system_pods.go:61] "kube-apiserver-embed-certs-20220801172918-13911" [16705423-1902-408d-bf96-c429bb0b369a] Running
	I0801 17:35:16.858173   30018 system_pods.go:61] "kube-controller-manager-embed-certs-20220801172918-13911" [18063908-5ab2-4a2e-8466-3d65005d104e] Running
	I0801 17:35:16.858180   30018 system_pods.go:61] "kube-proxy-x9k7x" [b4af731a-19c9-4ba9-ab8f-fe20332332d4] Running
	I0801 17:35:16.858188   30018 system_pods.go:61] "kube-scheduler-embed-certs-20220801172918-13911" [cd151a9c-b351-42c1-969b-0f19b6b82b41] Running
	I0801 17:35:16.858198   30018 system_pods.go:61] "metrics-server-5c6f97fb75-ssb94" [07af04bc-f4e5-4715-9a1d-b60f73f55288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:35:16.858206   30018 system_pods.go:61] "storage-provisioner" [4f72400e-5fc3-406e-b35b-742f9cd4d378] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:35:16.858210   30018 system_pods.go:74] duration metric: took 184.937881ms to wait for pod list to return data ...
	I0801 17:35:16.858216   30018 default_sa.go:34] waiting for default service account to be created ...
	I0801 17:35:17.054360   30018 default_sa.go:45] found service account: "default"
	I0801 17:35:17.054375   30018 default_sa.go:55] duration metric: took 196.108679ms for default service account to be created ...
	I0801 17:35:17.054386   30018 system_pods.go:116] waiting for k8s-apps to be running ...
	I0801 17:35:17.260263   30018 system_pods.go:86] 9 kube-system pods found
	I0801 17:35:17.260281   30018 system_pods.go:89] "coredns-6d4b75cb6d-9cxff" [3a5893dd-8ee8-436b-bca5-8c49d6224160] Running
	I0801 17:35:17.260286   30018 system_pods.go:89] "coredns-6d4b75cb6d-shsxd" [98813c90-a6e9-4120-9d54-057e7f340516] Running
	I0801 17:35:17.260290   30018 system_pods.go:89] "etcd-embed-certs-20220801172918-13911" [370a0346-c668-4e31-ad3b-6ae311038f95] Running
	I0801 17:35:17.260294   30018 system_pods.go:89] "kube-apiserver-embed-certs-20220801172918-13911" [16705423-1902-408d-bf96-c429bb0b369a] Running
	I0801 17:35:17.260300   30018 system_pods.go:89] "kube-controller-manager-embed-certs-20220801172918-13911" [18063908-5ab2-4a2e-8466-3d65005d104e] Running
	I0801 17:35:17.260306   30018 system_pods.go:89] "kube-proxy-x9k7x" [b4af731a-19c9-4ba9-ab8f-fe20332332d4] Running
	I0801 17:35:17.260315   30018 system_pods.go:89] "kube-scheduler-embed-certs-20220801172918-13911" [cd151a9c-b351-42c1-969b-0f19b6b82b41] Running
	I0801 17:35:17.260331   30018 system_pods.go:89] "metrics-server-5c6f97fb75-ssb94" [07af04bc-f4e5-4715-9a1d-b60f73f55288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:35:17.260342   30018 system_pods.go:89] "storage-provisioner" [4f72400e-5fc3-406e-b35b-742f9cd4d378] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:35:17.260351   30018 system_pods.go:126] duration metric: took 205.899962ms to wait for k8s-apps to be running ...
	I0801 17:35:17.260367   30018 system_svc.go:44] waiting for kubelet service to be running ....
	I0801 17:35:17.260425   30018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:35:17.275620   30018 system_svc.go:56] duration metric: took 15.249277ms WaitForService to wait for kubelet.
	I0801 17:35:17.275640   30018 kubeadm.go:572] duration metric: took 5.107267858s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0801 17:35:17.275656   30018 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:35:17.454892   30018 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:35:17.454918   30018 node_conditions.go:123] node cpu capacity is 6
	I0801 17:35:17.454929   30018 node_conditions.go:105] duration metric: took 179.230034ms to run NodePressure ...
	I0801 17:35:17.454950   30018 start.go:216] waiting for startup goroutines ...
	I0801 17:35:17.495749   30018 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:35:17.517809   30018 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220801172918-13911" cluster and "default" namespace by default
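
Note: at this point the embed-certs-20220801172918-13911 cluster is fully up and kubectl's current context points at it, so a manual spot check would be (an illustrative follow-up, not part of the test):

    kubectl config use-context embed-certs-20220801172918-13911
    kubectl -n kube-system get pods
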
	I0801 17:35:18.776640   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056791977s)
	I0801 17:35:18.776769   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:18.776779   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:18.825208   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:18.825237   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:21.339622   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:21.400203   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:21.433513   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.433525   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:21.433585   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:21.479281   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.479293   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:21.479351   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:21.528053   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.528075   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:21.528152   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:21.570823   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.570842   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:21.570914   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:21.622051   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.622066   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:21.622120   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:21.662421   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.662433   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:21.662494   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:21.700986   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.701004   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:21.701071   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:21.761715   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.761733   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:21.761744   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:21.761754   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:21.812508   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:21.812527   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:21.829925   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:21.829963   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:21.894716   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:21.894731   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:21.894740   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:21.915852   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:21.915872   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:23.988264   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.072039463s)
	I0801 17:35:26.488923   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:26.902539   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:26.934000   30307 logs.go:274] 0 containers: []
	W0801 17:35:26.934013   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:26.934097   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:26.962321   30307 logs.go:274] 0 containers: []
	W0801 17:35:26.962333   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:26.962392   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:26.991695   30307 logs.go:274] 0 containers: []
	W0801 17:35:26.991707   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:26.991767   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:27.019837   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.019849   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:27.019909   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:27.049346   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.049358   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:27.049416   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:27.078615   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.078626   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:27.078682   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:27.107692   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.107705   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:27.107764   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:27.135696   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.135711   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:27.135718   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:27.135726   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:27.179734   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:27.179751   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:27.192465   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:27.192482   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:27.246895   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:27.246908   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:27.246915   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:27.260599   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:27.260611   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:29.314532   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053665084s)
	I0801 17:35:31.815083   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:31.903100   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:31.934197   30307 logs.go:274] 0 containers: []
	W0801 17:35:31.934208   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:31.934264   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:31.963017   30307 logs.go:274] 0 containers: []
	W0801 17:35:31.963028   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:31.963086   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:31.993025   30307 logs.go:274] 0 containers: []
	W0801 17:35:31.993039   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:31.993098   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:32.022103   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.022116   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:32.022174   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:32.051243   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.051255   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:32.051310   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:32.081226   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.081238   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:32.081294   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:32.110522   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.110535   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:32.110593   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:32.139913   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.139927   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:32.139935   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:32.139943   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:32.181780   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:32.181796   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:32.194244   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:32.194258   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:32.244454   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:32.244465   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:32.244472   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:32.258059   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:32.258071   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:34.313901   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05563403s)
	I0801 17:35:36.814353   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:36.902028   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:36.932525   30307 logs.go:274] 0 containers: []
	W0801 17:35:36.932537   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:36.932595   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:36.965941   30307 logs.go:274] 0 containers: []
	W0801 17:35:36.965952   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:36.966010   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:36.997194   30307 logs.go:274] 0 containers: []
	W0801 17:35:36.997206   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:36.997265   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:37.027992   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.028004   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:37.028058   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:37.057894   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.057906   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:37.057963   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:37.091455   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.091467   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:37.091527   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:37.127099   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.127112   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:37.127168   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:37.164814   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.216228   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:37.216316   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:37.216333   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:37.259473   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:37.259490   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:37.271319   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:37.271338   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:37.326930   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:37.326944   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:37.326956   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:37.342336   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:37.342350   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:39.395576   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053071303s)
	I0801 17:35:41.898084   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:42.402026   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:42.434886   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.434900   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:42.434955   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:42.464377   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.464389   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:42.464445   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:42.492747   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.492759   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:42.492818   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:42.521139   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.521153   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:42.521209   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:42.550296   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.550307   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:42.550363   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:42.579268   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.579281   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:42.579338   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:42.608287   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.608299   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:42.608352   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:42.637135   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.637150   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:42.637163   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:42.637175   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:42.650659   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:42.650670   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:44.706919   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056125086s)
	I0801 17:35:44.707024   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:44.707030   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:44.746683   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:44.746696   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:44.757796   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:44.757808   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:44.810488   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:47.311546   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:47.402199   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:47.431779   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.431796   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:47.431878   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:47.462490   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.462504   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:47.462563   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:47.491434   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.491447   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:47.491504   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:47.520881   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.520894   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:47.520968   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:47.550517   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.550529   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:47.550584   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:47.580190   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.580205   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:47.580261   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:47.608687   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.608698   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:47.608757   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:47.638031   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.638044   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:47.638051   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:47.638057   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:47.649363   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:47.649376   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:47.701537   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:47.701547   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:47.701554   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:47.714906   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:47.714918   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:49.767687   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052668978s)
	I0801 17:35:49.767793   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:49.767799   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:52.306896   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:52.404357   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:52.435516   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.435528   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:52.435587   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:52.466505   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.466517   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:52.466576   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:52.495280   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.495292   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:52.495351   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:52.523452   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.523464   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:52.523522   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:52.552296   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.552308   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:52.552367   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:52.582614   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.582628   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:52.582686   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:52.611494   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.611510   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:52.611571   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:52.643062   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.643073   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:52.643081   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:52.643088   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:52.683875   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:52.683894   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:52.696292   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:52.696306   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:52.751367   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:52.751385   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:52.751398   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:52.764882   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:52.764895   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:54.823481   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058501379s)
	I0801 17:35:57.325623   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:57.404554   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:57.435795   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.435806   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:57.435864   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:57.464534   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.464547   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:57.464609   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:57.493563   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.493576   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:57.493631   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:57.521806   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.521818   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:57.521876   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:57.550038   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.550052   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:57.550128   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:57.584225   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.584251   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:57.584312   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:57.613276   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.613289   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:57.613348   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:57.641915   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.641927   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:57.641934   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:57.641942   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:57.681293   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:57.681305   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:57.692507   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:57.692519   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:57.744366   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:57.744377   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:57.744384   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:57.758258   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:57.758270   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:59.813771   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055426001s)
	I0801 17:36:02.314786   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:02.403097   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:02.432291   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.432303   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:02.432366   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:02.462408   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.462420   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:02.462478   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:02.491149   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.491167   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:02.491224   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:02.519302   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.519315   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:02.519372   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:02.548267   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.548281   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:02.548342   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:02.576524   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.576538   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:02.576595   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:02.605216   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.605228   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:02.605287   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:02.634873   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.634885   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:02.634892   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:02.634902   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:02.648952   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:02.648965   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:04.701091   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052060777s)
	I0801 17:36:04.701205   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:04.701212   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:04.740173   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:04.740190   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:04.751825   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:04.751838   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:04.803705   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:07.306022   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:07.404847   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:07.435505   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.435517   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:07.435573   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:07.463625   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.463637   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:07.463694   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:07.491535   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.491547   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:07.491610   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:07.520843   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.520855   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:07.520914   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:07.549909   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.549922   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:07.549979   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:07.578735   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.578749   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:07.578812   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:07.609291   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.609304   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:07.609360   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:07.638717   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.638731   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:07.638739   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:07.638746   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:07.650180   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:07.650194   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:07.708994   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:07.709004   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:07.709011   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:07.722398   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:07.722410   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:09.776740   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054270118s)
	I0801 17:36:09.776854   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:09.776862   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
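	The blocks above repeat minikube's apiserver wait loop: roughly every five seconds it probes for a running kube-apiserver process, lists the expected control-plane containers by their k8s_ name prefix, and re-gathers the kubelet, dmesg, Docker, and container-status logs until the apiserver answers or the wait times out. A minimal sketch of the same probe sequence, built from the exact commands in the log (the 60-attempt bound and 5s sleep are illustrative assumptions, not minikube's configured timeout):

	    for attempt in $(seq 1 60); do
	      # Same process probe the log runs each round
	      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "kube-apiserver process found"; break
	      fi
	      # Enumerate the control-plane containers kubelet should have created
	      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                  kubernetes-dashboard storage-provisioner kube-controller-manager; do
	        docker ps -a --filter=name=k8s_"$name" --format='{{.ID}}'
	      done
	      sleep 5
	    done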
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:30:28 UTC, end at Tue 2022-08-02 00:36:15 UTC. --
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.242833046Z" level=info msg="ignoring event" container=33d943ff01785e46ec269ea38c5537c9061a638d248bdbca1f4d9b964cad8ae5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.310572569Z" level=info msg="ignoring event" container=a8d996d9b18afe65405259056b23780e13dcafcfa4dadbe0a010e19c4e82effe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.389321095Z" level=info msg="ignoring event" container=e5610145c6feff9fc41be28eb0297dba482d643698ae08440f77ce82d69aa8f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.467139780Z" level=info msg="ignoring event" container=f2f8895ba66f35588ef1decfd2361a7af03babec5c977b47d37dc0be37897e01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.532070377Z" level=info msg="ignoring event" container=a8da684b35304f3e02c9af9174e6cd8273f50961cbd3510fa95feaa0b0ef11da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.601971814Z" level=info msg="ignoring event" container=1673a38e152c3c347b2a1d6111ad96c81b4ecd7489d60918795a16a66f6f8184 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.680608258Z" level=info msg="ignoring event" container=1e2a1561550fd9425759cb5d62096c799bb6d8d07766d0baa078a833e7840f02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.752343636Z" level=info msg="ignoring event" container=d8e0058ff4097f002da86cda4b0d201e903ecdf31e4c4c5887af2c0ffef14c89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.867065825Z" level=info msg="ignoring event" container=68e502f0033cb39346e7c3b665f5e295c0f6aed47243cf0feee2a94978e1f42c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.935003880Z" level=info msg="ignoring event" container=0e6dbe63e82dbc9579d26f29de928d47583a733aaae2f275dc7fe74ac0b7175f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:50 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:50.023355144Z" level=info msg="ignoring event" container=02237302c9e424373e4cedf08288baefb7b510b9ab354c2efddf7ec54a1e6032 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:35:14 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:14.564908110Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:14 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:14.565003761Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:14 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:14.566326202Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:16 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:16.305662800Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:35:16 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:16.600730986Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:35:19 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:19.374987682Z" level=info msg="ignoring event" container=40bc6ff544632d13729d6a699345b16ed533c8707457996a49d84f2354f55b04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:35:19 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:19.558597794Z" level=info msg="ignoring event" container=d18306f0dc48f3c1c5a1278d2211b23770e50ee329c7cd601b5d1f50e38a6773 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:35:19 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:19.962066991Z" level=info msg="ignoring event" container=537070bd27fe914e34196cb9e7ccee69444b543bb90b8e7cd4c17f2d6544f797 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:35:20 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:20.121317516Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Aug 02 00:35:20 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:20.852763154Z" level=info msg="ignoring event" container=93c2501241449b2ba013b38a8d7fa6af1623b332eb5fda06c7bd25a4d777c2b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:35:28 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:28.764716042Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:28 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:28.765068304Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:28 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:28.766221735Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:37 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:37.455852516Z" level=info msg="ignoring event" container=bc1913d66d5a68da924b860e256788e3d479d9ae2613c786d0fa96008de3dbcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
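	The fake.domain pull errors above are consistent with the test deliberately pointing an addon image (here apparently metrics-server) at the unreachable registry fake.domain; dockerd's DNS lookup fails before any pull can start. The failure reproduces with any reference under that host (the image path below is a hypothetical stand-in):

	    # Fails with "dial tcp: lookup fake.domain ... no such host", matching the journal above
	    sudo docker pull fake.domain/some/image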
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	bc1913d66d5a6       a90209bb39e3d                                                                                    39 seconds ago       Exited              dashboard-metrics-scraper   2                   9ecf70971fc58
	85e487f4c7c0b       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   51 seconds ago       Running             kubernetes-dashboard        0                   a1466bd874fca
	41ced93d20477       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   6d9cc4db534c0
	d1dbd2b7715b3       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   034f4758d0a35
	af9849a5e63b0       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   e0dc213ab39ca
	b3ff7d2aea220       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   5ec897f807247
	6ca99271de288       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   c2943a4f5c988
	4d084d04150f4       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   43ae96b09d011
	c3f4ade928adb       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   3235c8a5c24ab
	
	* 
	* ==> coredns [af9849a5e63b] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
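	The reload and 5s lameduck entries above match the health plugin settings in the cluster's Corefile. To view the stanza that kubeadm installed, a sketch assuming kubectl access to this cluster:

	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'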
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220801172918-13911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220801172918-13911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93
	                    minikube.k8s.io/name=embed-certs-20220801172918-13911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_08_01T17_34_58_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Aug 2022 00:34:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220801172918-13911
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Aug 2022 00:36:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Aug 2022 00:36:12 +0000   Tue, 02 Aug 2022 00:36:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Aug 2022 00:36:12 +0000   Tue, 02 Aug 2022 00:36:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Aug 2022 00:36:12 +0000   Tue, 02 Aug 2022 00:36:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 02 Aug 2022 00:36:12 +0000   Tue, 02 Aug 2022 00:36:12 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220801172918-13911
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c192b04687c403f8fbb9bc7975b21b3
	  System UUID:                6be68503-085a-4635-9350-f578be5c27e0
	  Boot ID:                    71cf565c-fd32-45eb-95e1-c87a7a5ba5a0
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9cxff                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-embed-certs-20220801172918-13911                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kube-apiserver-embed-certs-20220801172918-13911             250m (4%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-embed-certs-20220801172918-13911    200m (3%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-x9k7x                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-embed-certs-20220801172918-13911             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 metrics-server-5c6f97fb75-ssb94                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         62s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-vmfnk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-8fcx8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 61s              kube-proxy       
	  Normal  NodeReady                77s              kubelet          Node embed-certs-20220801172918-13911 status is now: NodeReady
	  Normal  Starting                 77s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  77s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  77s              kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s              kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s              kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           64s              node-controller  Node embed-certs-20220801172918-13911 event: Registered Node embed-certs-20220801172918-13911 in Controller
	  Normal  NodeNotReady             3s               node-controller  Node embed-certs-20220801172918-13911 status is now: NodeNotReady
	  Normal  Starting                 3s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s (x2 over 3s)  kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s (x2 over 3s)  kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s (x2 over 3s)  kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s               kubelet          Node embed-certs-20220801172918-13911 status is now: NodeNotReady
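	The events show the node bouncing to NodeNotReady right after a kubelet restart, with Ready=False reported in the conditions above as "PLEG is not healthy: pleg has yet to be successful". The same conditions can be queried directly; the node name is taken from this run:

	    kubectl get node embed-certs-20220801172918-13911 \
	      -o jsonpath='{range .status.conditions[*]}{.type}={.status} {.reason}{"\n"}{end}'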
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [4d084d04150f] <==
	* {"level":"info","ts":"2022-08-02T00:34:53.019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-08-02T00:34:53.019Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-08-02T00:34:53.021Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-08-02T00:34:53.021Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-08-02T00:34:53.021Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-08-02T00:34:53.021Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:34:53.021Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:34:53.414Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:embed-certs-20220801172918-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:34:53.414Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:34:53.420Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-08-02T00:34:53.414Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:34:53.414Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:34:53.414Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:34:53.420Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-08-02T00:34:53.421Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:34:53.421Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:34:53.421Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:34:53.421Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  00:36:15 up  1:01,  0 users,  load average: 0.95, 1.06, 1.12
	Linux embed-certs-20220801172918-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [c3f4ade928ad] <==
	* I0802 00:34:57.801876       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 00:34:58.567636       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0802 00:34:58.572879       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0802 00:34:58.581249       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0802 00:34:58.663627       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 00:35:11.465798       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0802 00:35:11.481781       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0802 00:35:13.734945       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0802 00:35:13.949702       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.103.102.61]
	W0802 00:35:14.755759       1 handler_proxy.go:102] no RequestInfo found in the context
	W0802 00:35:14.755870       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:35:14.755907       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:35:14.755914       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0802 00:35:14.756053       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:35:14.756958       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0802 00:35:14.764192       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.98.83.44]
	I0802 00:35:14.830459       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.99.89.46]
	W0802 00:36:14.720251       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:36:14.720329       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:36:14.720337       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:36:14.721427       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:36:14.721480       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:36:14.721486       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
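
The repeated 503s above all trace back to the v1beta1.metrics.k8s.io APIService, whose backing metrics-server pod is not serving yet; the apiserver itself is healthy. A sketch of how to confirm which side is failing, using standard kubectl subcommands against this profile's context:

    kubectl --context embed-certs-20220801172918-13911 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-20220801172918-13911 get --raw /apis/metrics.k8s.io/v1beta1

The first shows the APIService's Available condition; the second returns the same 503 body seen in the log while the backend is down.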
	
	* 
	* ==> kube-controller-manager [6ca99271de28] <==
	* I0802 00:35:14.673901       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:35:14.679001       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:35:14.679149       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:35:14.679180       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:35:14.719983       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0802 00:35:14.723172       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0802 00:35:14.723171       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:35:14.723188       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:35:14.723220       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:35:14.730814       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:35:14.730858       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:35:14.755676       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-8fcx8"
	I0802 00:35:14.768883       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-vmfnk"
	E0802 00:36:12.308816       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0802 00:36:12.317342       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0802 00:36:12.385329       1 event.go:294] "Event occurred" object="embed-certs-20220801172918-13911" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node embed-certs-20220801172918-13911 status is now: NodeNotReady"
	I0802 00:36:12.404794       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-embed-certs-20220801172918-13911" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.409059       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d-9cxff" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.415377       1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.422122       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-8fcx8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.437592       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-x9k7x" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.485571       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-embed-certs-20220801172918-13911" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.490756       1 event.go:294] "Event occurred" object="kube-system/etcd-embed-certs-20220801172918-13911" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.500049       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0802 00:36:12.500155       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-embed-certs-20220801172918-13911" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
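
The NodeNotReady burst at 00:36:12 coincides with the kubelet restart visible below (its journal resumes at 00:36:14), so every pod on the node is flagged at once. One way to watch the Ready condition recover, as a sketch:

    kubectl --context embed-certs-20220801172918-13911 get node embed-certs-20220801172918-13911 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'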
	
	* 
	* ==> kube-proxy [d1dbd2b7715b] <==
	* I0802 00:35:13.641118       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:35:13.641167       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:35:13.641229       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:35:13.731464       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:35:13.731514       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:35:13.731523       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:35:13.731532       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:35:13.731554       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:35:13.731719       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:35:13.732067       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:35:13.732074       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:35:13.733051       1 config.go:317] "Starting service config controller"
	I0802 00:35:13.733063       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:35:13.733077       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:35:13.733079       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:35:13.733629       1 config.go:444] "Starting node config controller"
	I0802 00:35:13.733637       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:35:13.833450       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0802 00:35:13.833539       1 shared_informer.go:262] Caches are synced for service config
	I0802 00:35:13.835253       1 shared_informer.go:262] Caches are synced for node config
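
kube-proxy settled on the iptables proxier (no proxy mode was set, per the "Unknown proxy mode" line), so ClusterIPs are programmed into the proxier's standard KUBE-SERVICES nat chain. A sketch for spot-checking that from the node, using the same ssh entry point the harness uses elsewhere in this report:

    out/minikube-darwin-amd64 -p embed-certs-20220801172918-13911 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head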
	
	* 
	* ==> kube-scheduler [b3ff7d2aea22] <==
	* W0802 00:34:55.740483       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 00:34:55.740495       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 00:34:55.740743       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 00:34:55.740824       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 00:34:55.740755       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 00:34:55.740936       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 00:34:55.741015       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 00:34:55.741047       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 00:34:55.741052       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 00:34:55.741062       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 00:34:56.629207       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 00:34:56.629244       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 00:34:56.633374       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 00:34:56.633408       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 00:34:56.641993       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 00:34:56.642026       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 00:34:56.645302       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 00:34:56.645338       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 00:34:56.739131       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 00:34:56.739149       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 00:34:56.846700       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 00:34:56.846741       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 00:34:56.893217       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 00:34:56.893254       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0802 00:34:59.037069       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
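
The forbidden list/watch errors above are the usual startup race: the scheduler comes up before the apiserver finishes bootstrapping its RBAC objects, and they stop once the informer caches sync (last line). If they persisted, the grant could be probed directly, for example:

    kubectl --context embed-certs-20220801172918-13911 auth can-i list pods --as=system:kube-scheduler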
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:30:28 UTC, end at Tue 2022-08-02 00:36:16 UTC. --
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.054335    9804 topology_manager.go:200] "Topology Admit Handler"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.090742    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4af731a-19c9-4ba9-ab8f-fe20332332d4-xtables-lock\") pod \"kube-proxy-x9k7x\" (UID: \"b4af731a-19c9-4ba9-ab8f-fe20332332d4\") " pod="kube-system/kube-proxy-x9k7x"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.090829    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zgxb\" (UniqueName: \"kubernetes.io/projected/b4af731a-19c9-4ba9-ab8f-fe20332332d4-kube-api-access-5zgxb\") pod \"kube-proxy-x9k7x\" (UID: \"b4af731a-19c9-4ba9-ab8f-fe20332332d4\") " pod="kube-system/kube-proxy-x9k7x"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.090854    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a5893dd-8ee8-436b-bca5-8c49d6224160-config-volume\") pod \"coredns-6d4b75cb6d-9cxff\" (UID: \"3a5893dd-8ee8-436b-bca5-8c49d6224160\") " pod="kube-system/coredns-6d4b75cb6d-9cxff"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.090920    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0d867994-9c56-41dc-9234-3dd9bbe748ef-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-8fcx8\" (UID: \"0d867994-9c56-41dc-9234-3dd9bbe748ef\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-8fcx8"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.090957    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4af731a-19c9-4ba9-ab8f-fe20332332d4-lib-modules\") pod \"kube-proxy-x9k7x\" (UID: \"b4af731a-19c9-4ba9-ab8f-fe20332332d4\") " pod="kube-system/kube-proxy-x9k7x"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.090998    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd4wk\" (UniqueName: \"kubernetes.io/projected/3a5893dd-8ee8-436b-bca5-8c49d6224160-kube-api-access-nd4wk\") pod \"coredns-6d4b75cb6d-9cxff\" (UID: \"3a5893dd-8ee8-436b-bca5-8c49d6224160\") " pod="kube-system/coredns-6d4b75cb6d-9cxff"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091037    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6466af83-b5c4-4761-b138-0b5c803c81fd-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-vmfnk\" (UID: \"6466af83-b5c4-4761-b138-0b5c803c81fd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-vmfnk"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091140    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/07af04bc-f4e5-4715-9a1d-b60f73f55288-tmp-dir\") pod \"metrics-server-5c6f97fb75-ssb94\" (UID: \"07af04bc-f4e5-4715-9a1d-b60f73f55288\") " pod="kube-system/metrics-server-5c6f97fb75-ssb94"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091195    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qztlv\" (UniqueName: \"kubernetes.io/projected/4f72400e-5fc3-406e-b35b-742f9cd4d378-kube-api-access-qztlv\") pod \"storage-provisioner\" (UID: \"4f72400e-5fc3-406e-b35b-742f9cd4d378\") " pod="kube-system/storage-provisioner"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091249    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b4af731a-19c9-4ba9-ab8f-fe20332332d4-kube-proxy\") pod \"kube-proxy-x9k7x\" (UID: \"b4af731a-19c9-4ba9-ab8f-fe20332332d4\") " pod="kube-system/kube-proxy-x9k7x"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091299    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdczj\" (UniqueName: \"kubernetes.io/projected/6466af83-b5c4-4761-b138-0b5c803c81fd-kube-api-access-jdczj\") pod \"dashboard-metrics-scraper-dffd48c4c-vmfnk\" (UID: \"6466af83-b5c4-4761-b138-0b5c803c81fd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-vmfnk"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091336    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphb7\" (UniqueName: \"kubernetes.io/projected/0d867994-9c56-41dc-9234-3dd9bbe748ef-kube-api-access-jphb7\") pod \"kubernetes-dashboard-5fd5574d9f-8fcx8\" (UID: \"0d867994-9c56-41dc-9234-3dd9bbe748ef\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-8fcx8"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091358    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcf88\" (UniqueName: \"kubernetes.io/projected/07af04bc-f4e5-4715-9a1d-b60f73f55288-kube-api-access-jcf88\") pod \"metrics-server-5c6f97fb75-ssb94\" (UID: \"07af04bc-f4e5-4715-9a1d-b60f73f55288\") " pod="kube-system/metrics-server-5c6f97fb75-ssb94"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091373    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4f72400e-5fc3-406e-b35b-742f9cd4d378-tmp\") pod \"storage-provisioner\" (UID: \"4f72400e-5fc3-406e-b35b-742f9cd4d378\") " pod="kube-system/storage-provisioner"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091395    9804 reconciler.go:157] "Reconciler: start to sync state"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:14.289401    9804 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220801172918-13911\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220801172918-13911"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:14.656946    9804 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220801172918-13911\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220801172918-13911"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:14.855674    9804 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220801172918-13911\" already exists" pod="kube-system/etcd-embed-certs-20220801172918-13911"
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:15.053384    9804 request.go:601] Waited for 1.05075005s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:15.116757    9804 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220801172918-13911\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220801172918-13911"
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:15.194384    9804 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:15.194468    9804 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3a5893dd-8ee8-436b-bca5-8c49d6224160-config-volume podName:3a5893dd-8ee8-436b-bca5-8c49d6224160 nodeName:}" failed. No retries permitted until 2022-08-02 00:36:15.694452676 +0000 UTC m=+3.174427119 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3a5893dd-8ee8-436b-bca5-8c49d6224160-config-volume") pod "coredns-6d4b75cb6d-9cxff" (UID: "3a5893dd-8ee8-436b-bca5-8c49d6224160") : failed to sync configmap cache: timed out waiting for the condition
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:15.194623    9804 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:15.194738    9804 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b4af731a-19c9-4ba9-ab8f-fe20332332d4-kube-proxy podName:b4af731a-19c9-4ba9-ab8f-fe20332332d4 nodeName:}" failed. No retries permitted until 2022-08-02 00:36:15.694726311 +0000 UTC m=+3.174700755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b4af731a-19c9-4ba9-ab8f-fe20332332d4-kube-proxy") pod "kube-proxy-x9k7x" (UID: "b4af731a-19c9-4ba9-ab8f-fe20332332d4") : failed to sync configmap cache: timed out waiting for the condition
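
"failed to sync configmap cache" right after a kubelet restart means the kubelet's informer had not warmed up yet, not that the ConfigMaps are gone; both mounts are retried after 500ms. Verifying that the source objects exist is enough to rule out a real miss (sketch):

    kubectl --context embed-certs-20220801172918-13911 -n kube-system get configmap coredns kube-proxy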
	
	* 
	* ==> kubernetes-dashboard [85e487f4c7c0] <==
	* 2022/08/02 00:35:25 Starting overwatch
	2022/08/02 00:35:25 Using namespace: kubernetes-dashboard
	2022/08/02 00:35:25 Using in-cluster config to connect to apiserver
	2022/08/02 00:35:25 Using secret token for csrf signing
	2022/08/02 00:35:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/08/02 00:35:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/08/02 00:35:25 Successful initial request to the apiserver, version: v1.24.3
	2022/08/02 00:35:25 Generating JWE encryption key
	2022/08/02 00:35:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/08/02 00:35:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/08/02 00:35:25 Initializing JWE encryption key from synchronized object
	2022/08/02 00:35:25 Creating in-cluster Sidecar client
	2022/08/02 00:35:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/08/02 00:35:25 Serving insecurely on HTTP port: 9090
	2022/08/02 00:36:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
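
The dashboard's metric client polls the dashboard-metrics-scraper service every 30 seconds and is hitting the same metrics.k8s.io outage the apiserver logged above. A sketch of checking the scraper's Service and its backing endpoints:

    kubectl --context embed-certs-20220801172918-13911 -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper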
	
	* 
	* ==> storage-provisioner [41ced93d2047] <==
	* I0802 00:35:14.672021       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 00:35:14.724512       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 00:35:14.725138       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 00:35:14.734508       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 00:35:14.734746       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220801172918-13911_8377462b-da99-49cf-8410-3e85e4e99b51!
	I0802 00:35:14.734744       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"52ace0d2-f308-4ead-b9ec-29e0d77bdfe0", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220801172918-13911_8377462b-da99-49cf-8410-3e85e4e99b51 became leader
	I0802 00:35:14.835860       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220801172918-13911_8377462b-da99-49cf-8410-3e85e4e99b51!
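
Leader election here uses an Endpoints lock named k8s.io-minikube-hostpath in kube-system (the object referenced in the event above). The current holder is recorded in the object's control-plane.alpha.kubernetes.io/leader annotation, which can be read with (sketch):

    kubectl --context embed-certs-20220801172918-13911 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml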
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220801172918-13911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-ssb94
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220801172918-13911 describe pod metrics-server-5c6f97fb75-ssb94
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220801172918-13911 describe pod metrics-server-5c6f97fb75-ssb94: exit status 1 (275.150472ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-ssb94" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220801172918-13911 describe pod metrics-server-5c6f97fb75-ssb94: exit status 1
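
The NotFound is benign: the metrics-server pod listed at helpers_test.go:270 was replaced between the list and the describe. Re-running the same field selector, or selecting by label, finds the current name (the k8s-app=metrics-server label is assumed from the upstream metrics-server manifest):

    kubectl --context embed-certs-20220801172918-13911 get pods -A --field-selector=status.phase!=Running
    kubectl --context embed-certs-20220801172918-13911 -n kube-system get pods -l k8s-app=metrics-server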
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220801172918-13911
helpers_test.go:235: (dbg) docker inspect embed-certs-20220801172918-13911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e",
	        "Created": "2022-08-02T00:29:24.764733922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 239100,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:30:27.941404852Z",
	            "FinishedAt": "2022-08-02T00:30:25.954194075Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e/hostname",
	        "HostsPath": "/var/lib/docker/containers/36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e/hosts",
	        "LogPath": "/var/lib/docker/containers/36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e/36a3296308ce140f4e668deaf97371e34302ab3706299022313d3afe596cc69e-json.log",
	        "Name": "/embed-certs-20220801172918-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220801172918-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220801172918-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6409b2cdb50e70d48bc0e2f9fd19921d57344ede11b4f296c3e51d67d8c063ff-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6409b2cdb50e70d48bc0e2f9fd19921d57344ede11b4f296c3e51d67d8c063ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6409b2cdb50e70d48bc0e2f9fd19921d57344ede11b4f296c3e51d67d8c063ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6409b2cdb50e70d48bc0e2f9fd19921d57344ede11b4f296c3e51d67d8c063ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220801172918-13911",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220801172918-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220801172918-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220801172918-13911",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220801172918-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3c26a0194c710fd65ea454df30a364a9abd7a135d55fb40b218b72a4e8bce5b6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50644"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50645"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50646"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50647"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50648"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3c26a0194c71",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220801172918-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "36a3296308ce",
	                        "embed-certs-20220801172918-13911"
	                    ],
	                    "NetworkID": "cc902d3931f689ec536b0026cbc9a9824733708535d90fc4f7a0dc8b971e8a42",
	                    "EndpointID": "202710a1eb3f6cd7fb64d18b5e50e9bc0cb248134bab8646aafcc93ada5be5e8",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
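
The inspect dump is verbose; the fields this post-mortem actually leans on (container state, node IP, and the host port mapped to the apiserver's 8443) can be pulled directly with Go templates, e.g. this sketch:

    docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "embed-certs-20220801172918-13911").IPAddress}}' embed-certs-20220801172918-13911
    docker port embed-certs-20220801172918-13911 8443/tcp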
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220801172918-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220801172918-13911 logs -n 25: (2.804285532s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p calico-20220801171038-13911                    | calico-20220801171038-13911             | jenkins | v1.26.0 | 01 Aug 22 17:25 PDT | 01 Aug 22 17:25 PDT |
	| start   | -p bridge-20220801171037-13911                    | bridge-20220801171037-13911             | jenkins | v1.26.0 | 01 Aug 22 17:25 PDT | 01 Aug 22 17:26 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p false-20220801171038-13911                     | false-20220801171038-13911              | jenkins | v1.26.0 | 01 Aug 22 17:25 PDT | 01 Aug 22 17:25 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p false-20220801171038-13911                     | false-20220801171038-13911              | jenkins | v1.26.0 | 01 Aug 22 17:26 PDT | 01 Aug 22 17:26 PDT |
	| start   | -p                                                | enable-default-cni-20220801171037-13911 | jenkins | v1.26.0 | 01 Aug 22 17:26 PDT | 01 Aug 22 17:26 PDT |
	|         | enable-default-cni-20220801171037-13911           |                                         |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220801171037-13911 | jenkins | v1.26.0 | 01 Aug 22 17:26 PDT | 01 Aug 22 17:26 PDT |
	|         | enable-default-cni-20220801171037-13911           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| ssh     | -p bridge-20220801171037-13911                    | bridge-20220801171037-13911             | jenkins | v1.26.0 | 01 Aug 22 17:26 PDT | 01 Aug 22 17:26 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p bridge-20220801171037-13911                    | bridge-20220801171037-13911             | jenkins | v1.26.0 | 01 Aug 22 17:27 PDT | 01 Aug 22 17:27 PDT |
	| delete  | -p                                                | enable-default-cni-20220801171037-13911 | jenkins | v1.26.0 | 01 Aug 22 17:27 PDT | 01 Aug 22 17:27 PDT |
	|         | enable-default-cni-20220801171037-13911           |                                         |         |         |                     |                     |
	| start   | -p                                                | kubenet-20220801171037-13911            | jenkins | v1.26.0 | 01 Aug 22 17:27 PDT | 01 Aug 22 17:28 PDT |
	|         | kubenet-20220801171037-13911                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220801172716-13911    | jenkins | v1.26.0 | 01 Aug 22 17:27 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220801171037-13911            | jenkins | v1.26.0 | 01 Aug 22 17:28 PDT | 01 Aug 22 17:28 PDT |
	|         | kubenet-20220801171037-13911                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220801171037-13911            | jenkins | v1.26.0 | 01 Aug 22 17:29 PDT | 01 Aug 22 17:29 PDT |
	|         | kubenet-20220801171037-13911                      |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:29 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220801172716-13911    | jenkins | v1.26.0 | 01 Aug 22 17:31 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220801172716-13911    | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT | 01 Aug 22 17:33 PDT |
	|         | old-k8s-version-20220801172716-13911              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220801172716-13911    | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT | 01 Aug 22 17:33 PDT |
	|         | old-k8s-version-20220801172716-13911              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220801172716-13911    | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220801172918-13911        | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
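	For reference, the wrapped rows of the final "start" entry above (17:33 PDT) reassemble into the single command below. This is a reconstruction from the table only; the binary path follows the MINIKUBE_BIN value reported later in this log.
	
		out/minikube-darwin-amd64 start -p old-k8s-version-20220801172716-13911 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0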
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:33:02
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:33:02.092956   30307 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:33:02.093151   30307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:33:02.093156   30307 out.go:309] Setting ErrFile to fd 2...
	I0801 17:33:02.093160   30307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:33:02.093248   30307 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:33:02.093715   30307 out.go:303] Setting JSON to false
	I0801 17:33:02.108781   30307 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9153,"bootTime":1659391229,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:33:02.108901   30307 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:33:02.131071   30307 out.go:177] * [old-k8s-version-20220801172716-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:33:02.207125   30307 notify.go:193] Checking for updates...
	I0801 17:33:02.227733   30307 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:33:02.269750   30307 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:33:02.311846   30307 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:33:02.354020   30307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:33:02.375064   30307 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:33:02.396274   30307 config.go:180] Loaded profile config "old-k8s-version-20220801172716-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0801 17:33:02.417428   30307 out.go:177] * Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	I0801 17:33:02.438938   30307 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:33:02.509086   30307 docker.go:137] docker version: linux-20.10.17
	I0801 17:33:02.509230   30307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:33:02.642340   30307 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:33:02.585183315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:33:02.684700   30307 out.go:177] * Using the docker driver based on existing profile
	I0801 17:33:02.705708   30307 start.go:284] selected driver: docker
	I0801 17:33:02.705726   30307 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:33:02.705810   30307 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:33:02.707990   30307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:33:02.841272   30307 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:33:02.783411359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:33:02.841425   30307 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 17:33:02.841442   30307 cni.go:95] Creating CNI manager for ""
	I0801 17:33:02.841454   30307 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:33:02.841463   30307 start_flags.go:310] config:
	{Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:33:02.863560   30307 out.go:177] * Starting control plane node old-k8s-version-20220801172716-13911 in cluster old-k8s-version-20220801172716-13911
	I0801 17:33:02.901007   30307 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:33:02.922018   30307 out.go:177] * Pulling base image ...
	I0801 17:33:02.994914   30307 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:33:02.994956   30307 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:33:02.995023   30307 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0801 17:33:02.995060   30307 cache.go:57] Caching tarball of preloaded images
	I0801 17:33:02.995280   30307 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:33:02.995300   30307 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
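	The preload verification above is a pure cache lookup: the v1.16.0 tarball is already present under .minikube/cache, so nothing is downloaded. Checking the same artifact by hand is a single stat on the host (a sketch; path copied from the log lines above):
	
		stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4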
	I0801 17:33:02.996429   30307 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/config.json ...
	I0801 17:33:03.060663   30307 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:33:03.060678   30307 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:33:03.060689   30307 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:33:03.060733   30307 start.go:371] acquiring machines lock for old-k8s-version-20220801172716-13911: {Name:mkbe9b0aeba6b12111b317502f6798dbe4170df1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:33:03.060814   30307 start.go:375] acquired machines lock for "old-k8s-version-20220801172716-13911" in 58.105µs
	I0801 17:33:03.060833   30307 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:33:03.060843   30307 fix.go:55] fixHost starting: 
	I0801 17:33:03.061068   30307 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220801172716-13911 --format={{.State.Status}}
	I0801 17:33:03.128234   30307 fix.go:103] recreateIfNeeded on old-k8s-version-20220801172716-13911: state=Stopped err=<nil>
	W0801 17:33:03.128265   30307 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:33:03.171939   30307 out.go:177] * Restarting existing docker container for "old-k8s-version-20220801172716-13911" ...
	I0801 17:33:03.192980   30307 cli_runner.go:164] Run: docker start old-k8s-version-20220801172716-13911
	I0801 17:33:03.538000   30307 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220801172716-13911 --format={{.State.Status}}
	I0801 17:33:03.611055   30307 kic.go:415] container "old-k8s-version-20220801172716-13911" state is running.
	I0801 17:33:03.611725   30307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:33:03.686263   30307 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/config.json ...
	I0801 17:33:03.686646   30307 machine.go:88] provisioning docker machine ...
	I0801 17:33:03.686671   30307 ubuntu.go:169] provisioning hostname "old-k8s-version-20220801172716-13911"
	I0801 17:33:03.686737   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:03.759719   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:03.759935   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:03.759949   30307 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220801172716-13911 && echo "old-k8s-version-20220801172716-13911" | sudo tee /etc/hostname
	I0801 17:33:03.881107   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220801172716-13911
	
	I0801 17:33:03.881202   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:03.953049   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:03.953193   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:03.953209   30307 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220801172716-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220801172716-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220801172716-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:33:04.068209   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: 
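	The /etc/hosts snippet above is idempotent: it only rewrites the file when no entry for the new hostname exists yet, either replacing an existing 127.0.1.1 line or appending one. A quick manual check of the result would use the profile's ssh subcommand, as the audit table does elsewhere in this run (a sketch; assumes the container is still running):
	
		out/minikube-darwin-amd64 ssh -p old-k8s-version-20220801172716-13911 "grep old-k8s-version-20220801172716-13911 /etc/hosts"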
	I0801 17:33:04.068228   30307 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:33:04.068250   30307 ubuntu.go:177] setting up certificates
	I0801 17:33:04.068257   30307 provision.go:83] configureAuth start
	I0801 17:33:04.068317   30307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:33:04.140299   30307 provision.go:138] copyHostCerts
	I0801 17:33:04.140379   30307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:33:04.140388   30307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:33:04.140472   30307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:33:04.140693   30307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:33:04.140702   30307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:33:04.140790   30307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:33:04.140960   30307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:33:04.140968   30307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:33:04.141026   30307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:33:04.141173   30307 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220801172716-13911 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220801172716-13911]
	I0801 17:33:04.220622   30307 provision.go:172] copyRemoteCerts
	I0801 17:33:04.220690   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:33:04.220732   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.292178   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:04.375104   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:33:04.392099   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0801 17:33:04.410165   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 17:33:04.426562   30307 provision.go:86] duration metric: configureAuth took 358.288794ms
	I0801 17:33:04.426574   30307 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:33:04.426746   30307 config.go:180] Loaded profile config "old-k8s-version-20220801172716-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0801 17:33:04.426801   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.497954   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:04.498129   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:04.498141   30307 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:33:04.611392   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:33:04.611410   30307 ubuntu.go:71] root file system type: overlay
	I0801 17:33:04.611545   30307 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:33:04.611619   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.683157   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:04.683304   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:04.683371   30307 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:33:04.808590   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
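	The empty "ExecStart=" in the unit above is what clears the command inherited from the base dockerd unit before the new one is set, as the embedded comments explain, and the diff/mv command just below installs the rendered file only when it differs from the current one, so an unchanged config never forces a daemon restart. To inspect the effective unit and the resulting server version by hand, the same commands this log runs later could be issued over ssh (a sketch, not part of this test run):
	
		out/minikube-darwin-amd64 ssh -p old-k8s-version-20220801172716-13911 "sudo systemctl cat docker.service"
		out/minikube-darwin-amd64 ssh -p old-k8s-version-20220801172716-13911 "docker version --format {{.Server.Version}}"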
	
	I0801 17:33:04.808679   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:04.879830   30307 main.go:134] libmachine: Using SSH client type: native
	I0801 17:33:04.879994   30307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50784 <nil> <nil>}
	I0801 17:33:04.880012   30307 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:33:04.997035   30307 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:33:04.997049   30307 machine.go:91] provisioned docker machine in 1.310380032s
	I0801 17:33:04.997056   30307 start.go:307] post-start starting for "old-k8s-version-20220801172716-13911" (driver="docker")
	I0801 17:33:04.997074   30307 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:33:04.997144   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:33:04.997190   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.069168   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.153399   30307 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:33:05.157021   30307 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:33:05.157038   30307 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:33:05.157045   30307 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:33:05.157050   30307 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:33:05.157058   30307 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:33:05.157159   30307 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:33:05.157296   30307 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:33:05.157452   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:33:05.164984   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:33:05.182269   30307 start.go:310] post-start completed in 185.186568ms
	I0801 17:33:05.182349   30307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:33:05.182412   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.253249   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.336526   30307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:33:05.340949   30307 fix.go:57] fixHost completed within 2.280081452s
	I0801 17:33:05.340961   30307 start.go:82] releasing machines lock for "old-k8s-version-20220801172716-13911", held for 2.280115227s
	I0801 17:33:05.341031   30307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220801172716-13911
	I0801 17:33:05.411603   30307 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:33:05.411607   30307 ssh_runner.go:195] Run: systemctl --version
	I0801 17:33:05.411671   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.411689   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:05.488484   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.490663   30307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50784 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/old-k8s-version-20220801172716-13911/id_rsa Username:docker}
	I0801 17:33:05.760297   30307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:33:05.770249   30307 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:33:05.770315   30307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:33:05.781723   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:33:05.794766   30307 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:33:05.869802   30307 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:33:05.934941   30307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:33:06.019332   30307 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:33:06.228189   30307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:33:06.267803   30307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:33:02.019050   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:04.516377   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:06.519806   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:06.346695   30307 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0801 17:33:06.346845   30307 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220801172716-13911 dig +short host.docker.internal
	I0801 17:33:06.475760   30307 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:33:06.475854   30307 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:33:06.480076   30307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:33:06.489496   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:06.561364   30307 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 17:33:06.561454   30307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:33:06.592913   30307 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0801 17:33:06.592929   30307 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:33:06.593009   30307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:33:06.623551   30307 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0801 17:33:06.623571   30307 cache_images.go:84] Images are preloaded, skipping loading
	I0801 17:33:06.623646   30307 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:33:06.699039   30307 cni.go:95] Creating CNI manager for ""
	I0801 17:33:06.699060   30307 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:33:06.699074   30307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:33:06.699090   30307 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220801172716-13911 NodeName:old-k8s-version-20220801172716-13911 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:33:06.699238   30307 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220801172716-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220801172716-13911
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0801 17:33:06.699312   30307 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220801172716-13911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
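	The rendered kubeadm YAML and kubelet drop-in above are staged onto the node in the scp lines that follow (kubeadm.yaml.new under /var/tmp/minikube, 10-kubeadm.conf under /etc/systemd/system/kubelet.service.d). A sketch for inspecting the staged files afterwards, assuming the profile is still up:
	
		out/minikube-darwin-amd64 ssh -p old-k8s-version-20220801172716-13911 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
		out/minikube-darwin-amd64 ssh -p old-k8s-version-20220801172716-13911 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"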
	I0801 17:33:06.699380   30307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0801 17:33:06.706617   30307 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:33:06.706669   30307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:33:06.713640   30307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0801 17:33:06.727903   30307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:33:06.740699   30307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0801 17:33:06.754028   30307 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:33:06.757691   30307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:33:06.767564   30307 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911 for IP: 192.168.76.2
	I0801 17:33:06.767666   30307 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:33:06.767715   30307 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:33:06.767802   30307 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/client.key
	I0801 17:33:06.767861   30307 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key.31bdca25
	I0801 17:33:06.767909   30307 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.key
	I0801 17:33:06.768129   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:33:06.768165   30307 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:33:06.768179   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:33:06.768215   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:33:06.768244   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:33:06.768273   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:33:06.768343   30307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:33:06.770066   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:33:06.786809   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 17:33:06.803930   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:33:06.820293   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801172716-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0801 17:33:06.836640   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:33:06.853270   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:33:06.869959   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:33:06.886388   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:33:06.903049   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:33:06.920046   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:33:06.936329   30307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:33:06.953108   30307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:33:06.965417   30307 ssh_runner.go:195] Run: openssl version
	I0801 17:33:06.970864   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:33:06.979779   30307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:33:06.983543   30307 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:33:06.983586   30307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:33:06.988888   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 17:33:06.995997   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:33:07.003729   30307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:33:07.007447   30307 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:33:07.007493   30307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:33:07.012803   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:33:07.020845   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:33:07.028574   30307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:33:07.032339   30307 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:33:07.032378   30307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:33:07.037622   30307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
	I0801 17:33:07.044888   30307 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220801172716-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220801172716-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:33:07.044982   30307 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:33:07.073047   30307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:33:07.080535   30307 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:33:07.080556   30307 kubeadm.go:626] restartCluster start
	I0801 17:33:07.080608   30307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:33:07.087807   30307 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.087873   30307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220801172716-13911
	I0801 17:33:08.520280   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:11.017860   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:07.161019   30307 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220801172716-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:33:07.161188   30307 kubeconfig.go:127] "old-k8s-version-20220801172716-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:33:07.161555   30307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
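	(Editor's note: the lock.go line above guards the kubeconfig repair with a retrying file lock (Delay:500ms Timeout:1m0s). A rough stdlib sketch of that pattern under those settings; minikube uses a lock library, so the names here are illustrative only:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lockfile until timeout elapses.
	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("kubeconfig.lock", 500*time.Millisecond, time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		// ... write the kubeconfig while holding the lock ...
	}
	)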
	I0801 17:33:07.162658   30307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:33:07.170122   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.170170   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.178204   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.378464   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.378560   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.388766   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.579693   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.579819   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.590131   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.780063   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.780238   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.791267   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:07.978733   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:07.978885   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:07.988977   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.178638   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.178717   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.187944   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.378810   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.378930   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.389502   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.578776   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.578955   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.589682   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.778805   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.778941   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.790788   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:08.980073   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:08.980189   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:08.990770   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.178462   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.178599   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.188930   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.378914   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.379012   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.389506   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.580573   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.580704   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.591607   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.780347   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.780485   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.790994   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:09.978646   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:09.978775   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:09.989169   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:10.178855   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:10.178968   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:10.187897   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:10.187907   30307 api_server.go:165] Checking apiserver status ...
	I0801 17:33:10.187955   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:33:10.195605   30307 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:33:10.195617   30307 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
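	(Editor's note: the block above re-ran the pgrep status check roughly every 200ms from 17:33:07 to 17:33:10 before giving up with "timed out waiting for the condition". An illustrative poll-until-deadline loop in plain Go matching that cadence; not minikube's actual code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until the apiserver process appears
	// or the deadline passes, mirroring the repeated checks above.
	func waitForAPIServer(interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Exit status 0 means pgrep found a matching process.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for the condition")
	}

	func main() {
		if err := waitForAPIServer(200*time.Millisecond, 3*time.Second); err != nil {
			fmt.Println("needs reconfigure: apiserver error:", err)
		}
	}
	)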
	I0801 17:33:10.195625   30307 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:33:10.195675   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:33:10.224715   30307 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:33:10.234985   30307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:33:10.242805   30307 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Aug  2 00:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5775 Aug  2 00:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Aug  2 00:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Aug  2 00:29 /etc/kubernetes/scheduler.conf
	
	I0801 17:33:10.242857   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 17:33:10.250643   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 17:33:10.258189   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 17:33:10.266321   30307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 17:33:10.273876   30307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:33:10.281390   30307 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:33:10.281402   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:10.329953   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:11.032947   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:11.233358   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:11.290594   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:33:11.342083   30307 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:33:11.342142   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:11.851910   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:13.020890   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:15.518246   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:12.351846   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:12.851217   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:13.353310   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:13.851936   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:14.353088   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:14.853235   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:15.353275   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:15.852184   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:16.353240   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:16.853252   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:17.519344   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:20.020318   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:17.353304   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:17.853335   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:18.351214   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:18.851526   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:19.351430   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:19.853261   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:20.352524   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:20.851275   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:21.352561   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:21.851472   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:22.518183   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:24.520049   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:26.520871   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:22.351688   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:22.851332   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:23.351357   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:23.851974   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:24.353354   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:24.851825   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:25.353110   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:25.851764   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:26.351912   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:26.851768   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:29.020027   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:31.518304   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:27.351519   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:27.851289   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:28.351671   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:28.851467   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:29.351418   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:29.851312   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:30.351309   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:30.851712   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:31.353119   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:31.852333   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:33.519382   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:35.520795   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:32.351358   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:32.851965   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:33.351587   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:33.852401   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:34.351610   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:34.851477   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:35.351739   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:35.852236   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:36.351836   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:36.852166   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:38.019767   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:40.519078   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:37.351461   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:37.852701   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:38.351889   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:38.853136   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:39.353555   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:39.851668   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:40.351742   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:40.852690   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:41.351542   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:41.851651   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:42.521319   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:45.017879   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:42.351647   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:42.852217   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:43.352460   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:43.851462   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:44.351520   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:44.851542   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:45.352287   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:45.851529   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:46.351462   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:46.853011   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:47.021800   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:49.022351   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:51.518460   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:47.353014   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:47.852957   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:48.351794   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:48.851608   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:49.353132   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:49.852861   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:50.351559   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:50.851826   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:51.351605   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:51.852394   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:53.519760   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:56.020459   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:52.351865   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:52.852613   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:53.352321   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:53.851626   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:54.351598   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:54.851666   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:55.351623   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:55.851667   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:56.351631   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:56.851992   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:58.021113   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:00.519291   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:33:57.351708   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:57.851772   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:58.351628   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:58.851633   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:59.352270   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:33:59.851588   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:00.351911   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:00.852107   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:01.352190   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:01.851781   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:02.519479   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:05.019235   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:02.352022   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:02.853040   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:03.352607   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:03.852400   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:04.351810   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:04.851747   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:05.351908   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:05.851982   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:06.353234   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:06.851753   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:07.519283   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:09.520506   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:07.351805   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:07.851765   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:08.353881   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:08.852724   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:09.351746   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:09.853807   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:10.353834   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:10.853159   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:11.352358   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:11.383418   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.383432   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:11.383494   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:11.413072   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.413084   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:11.413142   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:11.442218   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.442230   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:11.442288   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:11.470969   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.470982   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:11.471044   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:11.500295   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.500308   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:11.500367   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:11.533285   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.533298   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:11.533358   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:11.563355   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.563367   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:11.563427   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:11.592445   30307 logs.go:274] 0 containers: []
	W0801 17:34:11.592456   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:11.592479   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:11.592488   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:11.632510   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:11.632522   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:11.644313   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:11.644327   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:11.695794   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:11.695809   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:11.695815   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:11.709396   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:11.709407   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:12.018955   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:14.019462   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:16.519291   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:13.763461   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054019747s)
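	(Editor's note: each gathering cycle above probes for one container per control-plane component with a `name=k8s_<component>` filter and an ID-only format, and reports "0 containers: []" when nothing matches. A sketch of that detection step with exec.Command, assuming a local docker CLI; the helper name is hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of containers whose names match the
	// k8s_<component> prefix, as in the filters used by the log above.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "lookup failed:", err)
				continue
			}
			fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
		}
	}
	)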
	I0801 17:34:16.264200   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:16.353932   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:16.385118   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.385130   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:16.385190   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:16.414517   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.414529   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:16.414588   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:16.443356   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.443369   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:16.443435   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:16.477272   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.477285   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:16.477348   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:16.510936   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.510949   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:16.511011   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:16.547639   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.547652   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:16.547713   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:16.578107   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.578119   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:16.578177   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:16.607309   30307 logs.go:274] 0 containers: []
	W0801 17:34:16.607323   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:16.607331   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:16.607339   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:16.645996   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:16.646009   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:16.657128   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:16.657141   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:16.709161   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:16.709176   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:16.709182   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:16.722936   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:16.722954   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:19.021570   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:21.518428   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:18.775009   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052014549s)
	I0801 17:34:21.277564   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:21.354038   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:21.385924   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.385936   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:21.385997   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:21.414350   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.414362   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:21.414418   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:21.444094   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.444107   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:21.444162   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:21.472715   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.472727   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:21.472784   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:21.501199   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.501211   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:21.501288   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:21.534002   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.534016   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:21.534092   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:21.564027   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.564039   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:21.564098   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:21.593121   30307 logs.go:274] 0 containers: []
	W0801 17:34:21.593134   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:21.593143   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:21.593150   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:21.633306   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:21.633320   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:21.645837   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:21.645850   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:21.700543   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:21.700560   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:21.700567   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:21.714946   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:21.714960   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:23.519556   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:26.018392   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:23.771704   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056708133s)
	I0801 17:34:26.272261   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:26.353456   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:26.386051   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.386063   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:26.386119   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:26.415224   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.415236   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:26.415298   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:26.445222   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.445235   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:26.445292   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:26.475024   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.475037   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:26.475097   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:26.505006   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.505019   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:26.505077   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:26.542252   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.542265   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:26.542323   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:26.572302   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.572315   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:26.572374   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:26.601432   30307 logs.go:274] 0 containers: []
	W0801 17:34:26.601445   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:26.601452   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:26.601459   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:26.615447   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:26.615459   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:28.520501   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:31.021258   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:28.668228   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052734957s)
	I0801 17:34:28.668338   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:28.668347   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:28.707285   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:28.707298   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:28.718726   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:28.718739   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:28.769688   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:31.270139   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:31.352538   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:31.382379   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.382397   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:31.382466   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:31.414167   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.414180   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:31.414250   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:31.447114   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.447129   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:31.447197   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:31.478169   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.478183   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:31.478244   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:31.508755   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.508767   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:31.508826   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:31.541935   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.541949   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:31.542012   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:31.573200   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.573213   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:31.573271   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:31.601641   30307 logs.go:274] 0 containers: []
	W0801 17:34:31.601654   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:31.601661   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:31.601670   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:31.615421   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:31.615434   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:33.518886   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:35.523280   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:33.667553   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052084288s)
	I0801 17:34:33.667661   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:33.667671   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:33.708058   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:33.708075   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:33.721159   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:33.721175   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:33.773936   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:36.278098   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:36.358158   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:36.389134   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.389146   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:36.389206   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:36.418282   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.418294   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:36.418350   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:36.448321   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.448333   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:36.448391   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:36.477122   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.477138   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:36.477204   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:36.506036   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.506048   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:36.506118   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:36.550984   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.550998   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:36.551060   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:36.579712   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.579725   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:36.579788   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:36.608681   30307 logs.go:274] 0 containers: []
	W0801 17:34:36.608692   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:36.608699   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:36.608706   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:36.648271   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:36.648288   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:36.661072   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:36.661086   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:36.717917   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:36.717928   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:36.717936   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:36.732109   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:36.732124   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:38.029642   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:40.535248   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:38.791203   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053533989s)
	I0801 17:34:41.297687   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:41.369319   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:41.400112   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.400125   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:41.400185   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:41.429000   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.429013   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:41.429077   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:41.457782   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.457794   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:41.457850   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:41.489550   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.489562   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:41.489622   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:41.518587   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.518600   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:41.518658   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:41.549089   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.549101   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:41.549167   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:41.578870   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.578885   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:41.578945   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:41.608653   30307 logs.go:274] 0 containers: []
	W0801 17:34:41.608664   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:41.608671   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:41.608677   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:41.620204   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:41.620216   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:41.673763   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:41.673777   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:41.673784   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:41.688084   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:41.688096   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:42.540376   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:45.042994   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:43.745846   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053708064s)
	I0801 17:34:43.745957   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:43.745964   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:46.290648   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:46.378519   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:46.409191   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.409203   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:46.409260   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:46.438190   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.438201   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:46.438263   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:46.470731   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.470743   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:46.470802   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:46.502588   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.502599   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:46.502655   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:46.531976   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.531988   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:46.532047   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:46.566132   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.566145   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:46.566203   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:46.600014   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.600027   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:46.600083   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:46.629125   30307 logs.go:274] 0 containers: []
	W0801 17:34:46.629137   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:46.629144   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:46.629152   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:46.670158   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:46.670172   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:46.681911   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:46.681922   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:46.735993   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:46.736003   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:46.736010   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:46.750833   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:46.750849   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:47.048072   30018 pod_ready.go:102] pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace has status "Ready":"False"
	I0801 17:34:48.041271   30018 pod_ready.go:81] duration metric: took 4m0.004443286s waiting for pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace to be "Ready" ...
	E0801 17:34:48.041294   30018 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-jxjtw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0801 17:34:48.041313   30018 pod_ready.go:38] duration metric: took 4m6.545019337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:34:48.041349   30018 kubeadm.go:630] restartCluster took 4m16.250570913s
	W0801 17:34:48.041470   30018 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0801 17:34:48.041499   30018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0801 17:34:50.371547   30018 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.327026044s)
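The reset above follows a 4m0s pod-readiness timeout: metrics-server-5c6f97fb75-jxjtw never reported Ready, so restartCluster is abandoned and the cluster is re-initialized from scratch. An illustrative equivalent of the wait minikube performed internally (kubectl wait is a stand-in here; minikube polls the API directly, as the pod_ready lines show):

    # Block up to 4 minutes for the pod's Ready condition (illustrative).
    kubectl -n kube-system wait pod metrics-server-5c6f97fb75-jxjtw \
      --for=condition=Ready --timeout=4m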
	I0801 17:34:50.371607   30018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:34:50.381203   30018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:34:50.388735   30018 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:34:50.388781   30018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:34:50.395987   30018 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:34:50.396018   30018 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
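The ls probe just before this is the stale-config check: exit status 2 (all four kubeadm configs absent) marks a fresh node, so cleanup is skipped and kubeadm init runs directly. Because the node is a Docker container rather than a real host, init suppresses the preflight checks that would otherwise fail there (Swap, Mem, SystemVerification, the bridge-nf sysctl, and the DirAvailable/FileAvailable checks for paths minikube pre-populates). A condensed sketch of the sequence, with the ignore list abbreviated:

    # Fresh node: no stale kubeadm configs, so initialize directly.
    sudo ls /etc/kubernetes/admin.conf >/dev/null 2>&1 || \
      sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Swap,Mem,SystemVerification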
	I0801 17:34:50.668978   30018 out.go:204]   - Generating certificates and keys ...
	I0801 17:34:51.344642   30018 out.go:204]   - Booting up control plane ...
	I0801 17:34:48.809538   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055753562s)
	I0801 17:34:51.313270   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:51.384934   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:51.414232   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.414250   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:51.414304   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:51.441881   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.441894   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:51.441954   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:51.470802   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.470813   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:51.470866   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:51.499238   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.499252   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:51.499316   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:51.527042   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.527055   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:51.527112   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:51.556456   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.556473   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:51.556541   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:51.585716   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.585728   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:51.585797   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:51.615551   30307 logs.go:274] 0 containers: []
	W0801 17:34:51.615565   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:51.615572   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:51.615580   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:53.671946   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054212993s)
	I0801 17:34:53.672054   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:53.672061   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:53.714018   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:53.714031   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:53.725408   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:53.725422   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:53.778549   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:53.778560   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:53.778567   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:56.295298   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:34:56.390271   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:34:56.420485   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.420497   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:34:56.420554   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:34:56.449383   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.449397   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:34:56.449453   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:34:56.478432   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.478444   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:34:56.478500   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:34:56.506950   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.506962   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:34:56.507014   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:34:56.536393   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.536404   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:34:56.536463   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:34:56.565436   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.565449   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:34:56.565506   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:34:56.593950   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.593963   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:34:56.594019   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:34:56.621932   30307 logs.go:274] 0 containers: []
	W0801 17:34:56.621945   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:34:56.621953   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:34:56.621960   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:34:56.663174   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:34:56.663190   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:34:56.675466   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:34:56.675478   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:34:56.736252   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:34:56.736265   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:34:56.736272   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:34:56.751881   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:34:56.751896   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:34:58.398929   30018 out.go:204]   - Configuring RBAC rules ...
	I0801 17:34:58.775671   30018 cni.go:95] Creating CNI manager for ""
	I0801 17:34:58.775685   30018 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:34:58.775705   30018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:34:58.775788   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93 minikube.k8s.io/name=embed-certs-20220801172918-13911 minikube.k8s.io/updated_at=2022_08_01T17_34_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:34:58.775809   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:34:58.873180   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:34:58.932123   30018 ops.go:34] apiserver oom_adj: -16
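The oom_adj read above (cat /proc/$(pgrep kube-apiserver)/oom_adj) verifies OOM protection on the new apiserver: -16 on the legacy -17..15 scale tells the kernel's OOM killer to strongly prefer other victims. The same check by hand:

    # -16 (as logged) deprioritizes the apiserver as an OOM-kill target.
    cat /proc/"$(pgrep -n kube-apiserver)"/oom_adj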
	I0801 17:34:59.469685   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:34:59.970374   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:00.470300   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:00.971237   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:01.470890   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:34:58.810799   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057341572s)
	I0801 17:35:01.313979   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:01.394957   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:01.429935   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.429948   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:01.430007   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:01.458854   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.458869   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:01.458940   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:01.489769   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.489781   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:01.489839   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:01.522081   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.522092   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:01.522152   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:01.552276   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.552288   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:01.552347   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:01.581231   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.581242   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:01.581303   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:01.610456   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.610468   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:01.610527   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:01.640825   30307 logs.go:274] 0 containers: []
	W0801 17:35:01.640838   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:01.640845   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:01.640851   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:01.681164   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:01.681182   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:01.693005   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:01.693020   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:01.745760   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:01.745779   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:01.745785   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:01.760279   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:01.760291   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:01.973266   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:02.473561   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:02.971780   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:03.472297   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:03.974374   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:04.473555   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:04.973245   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:05.475185   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:05.974177   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:06.473667   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:03.814149   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052717763s)
	I0801 17:35:06.317273   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:06.397453   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:06.431739   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.431750   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:06.431808   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:06.460085   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.460096   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:06.460155   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:06.490788   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.490801   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:06.490865   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:06.521225   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.521238   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:06.521296   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:06.551676   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.551690   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:06.551748   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:06.581891   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.581903   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:06.581967   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:06.610415   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.610428   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:06.610487   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:06.638868   30307 logs.go:274] 0 containers: []
	W0801 17:35:06.638881   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:06.638888   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:06.638896   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:06.677340   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:06.677355   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:06.689281   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:06.689296   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:06.741694   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:06.741718   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:06.741724   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:06.757440   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:06.757454   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:06.973851   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:07.474031   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:07.976257   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:08.475196   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:08.974829   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:09.476743   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:09.976344   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:10.475081   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:10.975234   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:11.475356   30018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:35:11.646732   30018 kubeadm.go:1045] duration metric: took 12.864781452s to wait for elevateKubeSystemPrivileges.
	I0801 17:35:11.646753   30018 kubeadm.go:397] StartCluster complete in 4m39.875683659s
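The burst of kubectl get sa default calls above is a readiness poll: kubeadm creates the default service account asynchronously, so minikube retries on a roughly 500ms cadence until the query succeeds (12.86s here). The same poll as a loop:

    # Wait until the default service account exists, i.e. the namespace
    # has been fully bootstrapped by the controller manager.
    until sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done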
	I0801 17:35:11.646774   30018 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:35:11.646883   30018 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:35:11.647640   30018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:35:08.810862   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052570896s)
	I0801 17:35:11.312050   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:11.397246   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:11.438278   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.438296   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:11.438374   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:11.469285   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.469299   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:11.469369   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:11.506443   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.506454   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:11.506511   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:11.550600   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.550618   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:11.550696   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:11.587813   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.587828   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:11.587900   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:11.616041   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.616053   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:11.616109   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:11.656883   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.656898   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:11.656974   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:11.687937   30307 logs.go:274] 0 containers: []
	W0801 17:35:11.687953   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:11.687962   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:11.687971   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:11.730338   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:11.730358   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:11.742630   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:11.742643   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:11.795410   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:11.795421   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:11.795429   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:11.809830   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:11.809843   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:12.166973   30018 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220801172918-13911" rescaled to 1
	I0801 17:35:12.167012   30018 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:35:12.188221   30018 out.go:177] * Verifying Kubernetes components...
	I0801 17:35:12.167030   30018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:35:12.167053   30018 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0801 17:35:12.167200   30018 config.go:180] Loaded profile config "embed-certs-20220801172918-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:35:12.262549   30018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:35:12.262646   30018 addons.go:65] Setting dashboard=true in profile "embed-certs-20220801172918-13911"
	I0801 17:35:12.262645   30018 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220801172918-13911"
	I0801 17:35:12.262650   30018 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220801172918-13911"
	I0801 17:35:12.262708   30018 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220801172918-13911"
	W0801 17:35:12.262728   30018 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:35:12.262751   30018 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220801172918-13911"
	I0801 17:35:12.262648   30018 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220801172918-13911"
	I0801 17:35:12.262814   30018 host.go:66] Checking if "embed-certs-20220801172918-13911" exists ...
	I0801 17:35:12.262813   30018 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220801172918-13911"
	W0801 17:35:12.262868   30018 addons.go:162] addon metrics-server should already be in state true
	I0801 17:35:12.262679   30018 addons.go:153] Setting addon dashboard=true in "embed-certs-20220801172918-13911"
	I0801 17:35:12.262923   30018 host.go:66] Checking if "embed-certs-20220801172918-13911" exists ...
	W0801 17:35:12.262921   30018 addons.go:162] addon dashboard should already be in state true
	I0801 17:35:12.263003   30018 host.go:66] Checking if "embed-certs-20220801172918-13911" exists ...
	I0801 17:35:12.263311   30018 cli_runner.go:164] Run: docker container inspect embed-certs-20220801172918-13911 --format={{.State.Status}}
	I0801 17:35:12.263389   30018 cli_runner.go:164] Run: docker container inspect embed-certs-20220801172918-13911 --format={{.State.Status}}
	I0801 17:35:12.264215   30018 cli_runner.go:164] Run: docker container inspect embed-certs-20220801172918-13911 --format={{.State.Status}}
	I0801 17:35:12.267920   30018 cli_runner.go:164] Run: docker container inspect embed-certs-20220801172918-13911 --format={{.State.Status}}
	I0801 17:35:12.348340   30018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220801172918-13911
	I0801 17:35:12.348347   30018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0801 17:35:12.430880   30018 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220801172918-13911"
	I0801 17:35:12.475399   30018 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:35:12.454161   30018 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0801 17:35:12.475414   30018 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:35:12.496555   30018 host.go:66] Checking if "embed-certs-20220801172918-13911" exists ...
	I0801 17:35:12.496620   30018 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:35:12.554590   30018 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0801 17:35:12.554611   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0801 17:35:12.517315   30018 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0801 17:35:12.529636   30018 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220801172918-13911" to be "Ready" ...
	I0801 17:35:12.554610   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 17:35:12.554708   30018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801172918-13911
	I0801 17:35:12.555094   30018 cli_runner.go:164] Run: docker container inspect embed-certs-20220801172918-13911 --format={{.State.Status}}
	I0801 17:35:12.613680   30018 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0801 17:35:12.591822   30018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801172918-13911
	I0801 17:35:12.650713   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0801 17:35:12.650742   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0801 17:35:12.650820   30018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801172918-13911
	I0801 17:35:12.654361   30018 node_ready.go:49] node "embed-certs-20220801172918-13911" has status "Ready":"True"
	I0801 17:35:12.654387   30018 node_ready.go:38] duration metric: took 62.755869ms waiting for node "embed-certs-20220801172918-13911" to be "Ready" ...
	I0801 17:35:12.654435   30018 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:35:12.689556   30018 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:35:12.689581   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:35:12.689651   30018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801172918-13911
	I0801 17:35:12.692736   30018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50644 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801172918-13911/id_rsa Username:docker}
	I0801 17:35:12.708311   30018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50644 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801172918-13911/id_rsa Username:docker}
	I0801 17:35:12.720631   30018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9cxff" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:12.753737   30018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50644 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801172918-13911/id_rsa Username:docker}
	I0801 17:35:12.783766   30018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50644 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801172918-13911/id_rsa Username:docker}
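The four sshutil clients above all dial 127.0.0.1:50644: minikube resolves the host port Docker published for the node's 22/tcp and authenticates with the profile's id_rsa. The port lookup, exactly as the cli_runner lines show:

    # Resolve the host port mapped to the node container's SSH port.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      embed-certs-20220801172918-13911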
	I0801 17:35:12.944858   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0801 17:35:12.944871   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0801 17:35:12.946350   30018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:35:12.948041   30018 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0801 17:35:12.948053   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0801 17:35:13.035132   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0801 17:35:13.035149   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0801 17:35:13.037878   30018 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0801 17:35:13.037893   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0801 17:35:13.048088   30018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:35:13.128376   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0801 17:35:13.128394   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0801 17:35:13.140624   30018 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:35:13.140637   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0801 17:35:13.226657   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0801 17:35:13.226671   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0801 17:35:13.228059   30018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:35:13.241179   30018 pod_ready.go:92] pod "coredns-6d4b75cb6d-9cxff" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:13.241198   30018 pod_ready.go:81] duration metric: took 520.389417ms waiting for pod "coredns-6d4b75cb6d-9cxff" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:13.241207   30018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-shsxd" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:13.329185   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0801 17:35:13.329199   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0801 17:35:13.350206   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0801 17:35:13.350242   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0801 17:35:13.430373   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0801 17:35:13.430393   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0801 17:35:13.538425   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0801 17:35:13.538447   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0801 17:35:13.627865   30018 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.279116488s)
	I0801 17:35:13.627886   30018 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
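
The replace command completed just above shows how the host.minikube.internal record gets into CoreDNS: the coredns ConfigMap is dumped to YAML, piped through sed (which inserts a hosts block just before the forward directive), and fed back through kubectl replace. A minimal Go sketch that rebuilds that exact one-liner for a given host IP; buildHostRecordCmd is a hypothetical helper for illustration, not minikube's API:

    package main

    import "fmt"

    // buildHostRecordCmd reconstructs the logged bash pipeline: dump the coredns
    // ConfigMap, insert a "hosts { <ip> host.minikube.internal; fallthrough }"
    // block before the forward directive, then replace the ConfigMap in place.
    func buildHostRecordCmd(kubectl, kubeconfig, hostIP string) string {
        return fmt.Sprintf(`sudo %[1]s --kubeconfig=%[2]s -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %[3]s host.minikube.internal\n           fallthrough\n        }' | sudo %[1]s --kubeconfig=%[2]s replace -f -`,
            kubectl, kubeconfig, hostIP)
    }

    func main() {
        fmt.Println(buildHostRecordCmd(
            "/var/lib/minikube/binaries/v1.24.3/kubectl",
            "/var/lib/minikube/kubeconfig",
            "192.168.65.2"))
    }

The fallthrough directive matters here: names that are not host.minikube.internal fall through the hosts plugin to the rest of the Corefile, so normal cluster DNS is unaffected.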
	I0801 17:35:13.630411   30018 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:35:13.630425   30018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0801 17:35:13.647618   30018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:35:13.958840   30018 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220801172918-13911"
	I0801 17:35:14.835963   30018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.187987488s)
	I0801 17:35:14.877697   30018 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0801 17:35:14.936943   30018 addons.go:414] enableAddons completed in 2.769096482s
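
The addon sequence above follows one pattern per manifest: the YAML is written to /etc/kubernetes/addons/ over SSH (the "scp memory --> ..." lines), and then a single kubectl apply covers a whole addon's worth of files. A rough local analogue of the metrics-server apply step, assuming the manifests already exist on disk; the real run executes this over the cluster's SSH session with sudo:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Manifest paths and the kubectl binary path are taken from the log.
        files := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.24.3/kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s(err=%v)\n", out, err)
    }

Batching every file into one apply keeps the addon atomic from kubectl's point of view and explains why each addon appears as a single long Run line in this log.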
	I0801 17:35:15.255000   30018 pod_ready.go:102] pod "coredns-6d4b75cb6d-shsxd" in "kube-system" namespace has status "Ready":"False"
	I0801 17:35:16.257299   30018 pod_ready.go:92] pod "coredns-6d4b75cb6d-shsxd" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.257312   30018 pod_ready.go:81] duration metric: took 3.015307872s waiting for pod "coredns-6d4b75cb6d-shsxd" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.257320   30018 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.261093   30018 pod_ready.go:92] pod "etcd-embed-certs-20220801172918-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.261101   30018 pod_ready.go:81] duration metric: took 3.774633ms waiting for pod "etcd-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.261106   30018 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.265310   30018 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220801172918-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.265319   30018 pod_ready.go:81] duration metric: took 4.206335ms waiting for pod "kube-apiserver-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.265324   30018 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.269346   30018 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220801172918-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.269354   30018 pod_ready.go:81] duration metric: took 4.01792ms waiting for pod "kube-controller-manager-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.269360   30018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x9k7x" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.273431   30018 pod_ready.go:92] pod "kube-proxy-x9k7x" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.273438   30018 pod_ready.go:81] duration metric: took 4.073268ms waiting for pod "kube-proxy-x9k7x" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.273444   30018 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.653610   30018 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220801172918-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:35:16.653620   30018 pod_ready.go:81] duration metric: took 380.082522ms waiting for pod "kube-scheduler-embed-certs-20220801172918-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:35:16.653625   30018 pod_ready.go:38] duration metric: took 3.998116209s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
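
The pod_ready lines above poll every system-critical pod until its PodReady condition reports True, each with its own 6m0s budget. A minimal client-go sketch of that loop, assuming the kubeconfig path from the log and a 500ms poll interval (the real interval is not visible in this output):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True -- the status
    // the pod_ready.go lines above are waiting on.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "coredns-6d4b75cb6d-9cxff", metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            return isPodReady(pod), nil
        })
        fmt.Println("ready:", err == nil)
    }
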
	I0801 17:35:16.653640   30018 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:35:16.653687   30018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:16.665575   30018 api_server.go:71] duration metric: took 4.497341647s to wait for apiserver process to appear ...
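
The process check above relies purely on pgrep's exit status: -f matches against the full command line, -x requires the whole line to match the pattern, -n picks the newest match, and exit code 0 means a kube-apiserver process exists. A sketch, with sudo dropped for a local run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Pattern copied from the log; pgrep exits 1 when nothing matches.
        err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        fmt.Println("apiserver process found:", err == nil)
    }
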
	I0801 17:35:16.665593   30018 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:35:16.665602   30018 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50648/healthz ...
	I0801 17:35:16.671952   30018 api_server.go:266] https://127.0.0.1:50648/healthz returned 200:
	ok
	I0801 17:35:16.673212   30018 api_server.go:140] control plane version: v1.24.3
	I0801 17:35:16.673221   30018 api_server.go:130] duration metric: took 7.622383ms to wait for apiserver health ...
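
The healthz probe above is a plain HTTPS GET against the forwarded apiserver port, expecting a 200 with body "ok". A minimal sketch; certificate verification is skipped here because the endpoint is 127.0.0.1 fronting a cluster-issued certificate (how minikube actually configures TLS for this probe is not shown in the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://127.0.0.1:50648/healthz") // port from the log
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
    }
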
	I0801 17:35:16.673226   30018 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:35:13.874861   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064395737s)
	I0801 17:35:16.376183   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:16.399312   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:16.430262   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.430275   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:16.430337   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:16.460017   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.460034   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:16.460093   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:16.491848   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.491860   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:16.491920   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:16.521940   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.521955   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:16.522015   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:16.551494   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.551507   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:16.551567   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:16.582166   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.582182   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:16.582246   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:16.613564   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.613577   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:16.613646   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:16.642889   30307 logs.go:274] 0 containers: []
	W0801 17:35:16.642902   30307 logs.go:276] No container was found matching "kube-controller-manager"
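
Each probe in the block above runs the same docker query, varying only the component name: containers created by the kubelet are named k8s_<component>_..., so filtering on that prefix and printing only {{.ID}} yields one container ID per line, or nothing at all -- which is what produces every "0 containers: []" line in this log. A sketch of one probe:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers (running or exited)
    // whose name carries the kubelet's k8s_<component> prefix.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listContainers("kube-apiserver")
        fmt.Printf("%d containers: %v (err=%v)\n", len(ids), ids, err)
    }
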
	I0801 17:35:16.642909   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:16.642916   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:16.705324   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:16.705334   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:16.705340   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:16.719372   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:16.719385   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
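
The container-status probe above is a shell fallback chain rather than a single binary: use crictl if `which` finds it on the PATH, otherwise fall back to docker ps -a. Reproducing the logged script verbatim from Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Script copied from the log line above (sudo kept; drop it for a
        // non-root local experiment).
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        fmt.Printf("%s(err=%v)\n", out, err)
    }
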
	I0801 17:35:16.858133   30018 system_pods.go:59] 9 kube-system pods found
	I0801 17:35:16.858146   30018 system_pods.go:61] "coredns-6d4b75cb6d-9cxff" [3a5893dd-8ee8-436b-bca5-8c49d6224160] Running
	I0801 17:35:16.858150   30018 system_pods.go:61] "coredns-6d4b75cb6d-shsxd" [98813c90-a6e9-4120-9d54-057e7f340516] Running
	I0801 17:35:16.858153   30018 system_pods.go:61] "etcd-embed-certs-20220801172918-13911" [370a0346-c668-4e31-ad3b-6ae311038f95] Running
	I0801 17:35:16.858157   30018 system_pods.go:61] "kube-apiserver-embed-certs-20220801172918-13911" [16705423-1902-408d-bf96-c429bb0b369a] Running
	I0801 17:35:16.858173   30018 system_pods.go:61] "kube-controller-manager-embed-certs-20220801172918-13911" [18063908-5ab2-4a2e-8466-3d65005d104e] Running
	I0801 17:35:16.858180   30018 system_pods.go:61] "kube-proxy-x9k7x" [b4af731a-19c9-4ba9-ab8f-fe20332332d4] Running
	I0801 17:35:16.858188   30018 system_pods.go:61] "kube-scheduler-embed-certs-20220801172918-13911" [cd151a9c-b351-42c1-969b-0f19b6b82b41] Running
	I0801 17:35:16.858198   30018 system_pods.go:61] "metrics-server-5c6f97fb75-ssb94" [07af04bc-f4e5-4715-9a1d-b60f73f55288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:35:16.858206   30018 system_pods.go:61] "storage-provisioner" [4f72400e-5fc3-406e-b35b-742f9cd4d378] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:35:16.858210   30018 system_pods.go:74] duration metric: took 184.937881ms to wait for pod list to return data ...
	I0801 17:35:16.858216   30018 default_sa.go:34] waiting for default service account to be created ...
	I0801 17:35:17.054360   30018 default_sa.go:45] found service account: "default"
	I0801 17:35:17.054375   30018 default_sa.go:55] duration metric: took 196.108679ms for default service account to be created ...
	I0801 17:35:17.054386   30018 system_pods.go:116] waiting for k8s-apps to be running ...
	I0801 17:35:17.260263   30018 system_pods.go:86] 9 kube-system pods found
	I0801 17:35:17.260281   30018 system_pods.go:89] "coredns-6d4b75cb6d-9cxff" [3a5893dd-8ee8-436b-bca5-8c49d6224160] Running
	I0801 17:35:17.260286   30018 system_pods.go:89] "coredns-6d4b75cb6d-shsxd" [98813c90-a6e9-4120-9d54-057e7f340516] Running
	I0801 17:35:17.260290   30018 system_pods.go:89] "etcd-embed-certs-20220801172918-13911" [370a0346-c668-4e31-ad3b-6ae311038f95] Running
	I0801 17:35:17.260294   30018 system_pods.go:89] "kube-apiserver-embed-certs-20220801172918-13911" [16705423-1902-408d-bf96-c429bb0b369a] Running
	I0801 17:35:17.260300   30018 system_pods.go:89] "kube-controller-manager-embed-certs-20220801172918-13911" [18063908-5ab2-4a2e-8466-3d65005d104e] Running
	I0801 17:35:17.260306   30018 system_pods.go:89] "kube-proxy-x9k7x" [b4af731a-19c9-4ba9-ab8f-fe20332332d4] Running
	I0801 17:35:17.260315   30018 system_pods.go:89] "kube-scheduler-embed-certs-20220801172918-13911" [cd151a9c-b351-42c1-969b-0f19b6b82b41] Running
	I0801 17:35:17.260331   30018 system_pods.go:89] "metrics-server-5c6f97fb75-ssb94" [07af04bc-f4e5-4715-9a1d-b60f73f55288] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:35:17.260342   30018 system_pods.go:89] "storage-provisioner" [4f72400e-5fc3-406e-b35b-742f9cd4d378] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:35:17.260351   30018 system_pods.go:126] duration metric: took 205.899962ms to wait for k8s-apps to be running ...
	I0801 17:35:17.260367   30018 system_svc.go:44] waiting for kubelet service to be running ....
	I0801 17:35:17.260425   30018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:35:17.275620   30018 system_svc.go:56] duration metric: took 15.249277ms WaitForService to wait for kubelet.
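
The kubelet check above needs no output parsing: systemctl is-active --quiet communicates entirely through its exit code, 0 meaning active. A local sketch; the logged invocation additionally runs under sudo and passes the literal word "service" before the unit name, which systemctl treats as a second unit to check:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
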
	I0801 17:35:17.275640   30018 kubeadm.go:572] duration metric: took 5.107267858s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0801 17:35:17.275656   30018 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:35:17.454892   30018 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:35:17.454918   30018 node_conditions.go:123] node cpu capacity is 6
	I0801 17:35:17.454929   30018 node_conditions.go:105] duration metric: took 179.230034ms to run NodePressure ...
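
The NodePressure step above reads capacities straight off the Node object: 115334268Ki of ephemeral storage and 6 CPUs in this run. A client-go sketch that prints the same two fields, assuming the kubeconfig path from the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
                n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
        }
    }
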
	I0801 17:35:17.454950   30018 start.go:216] waiting for startup goroutines ...
	I0801 17:35:17.495749   30018 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:35:17.517809   30018 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220801172918-13911" cluster and "default" namespace by default
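
The final version line compares only the minor components of the kubectl client and the cluster: 1.24.1 vs 1.24.3 differ in patch level only, hence "minor skew: 0". A hypothetical sketch of that comparison; minikube's actual version parsing may differ:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns |client minor - server minor|, ignoring patch versions.
    func minorSkew(client, server string) int {
        minor := func(v string) int {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0
            }
            m, _ := strconv.Atoi(parts[1])
            return m
        }
        d := minor(client) - minor(server)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        fmt.Println(minorSkew("1.24.1", "1.24.3")) // prints 0
    }
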
	I0801 17:35:18.776640   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056791977s)
	I0801 17:35:18.776769   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:18.776779   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:18.825208   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:18.825237   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
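
The diagnostics loop that fills the rest of this log cycles through the same fixed probes every few seconds: journalctl for the kubelet and docker units, a filtered dmesg, kubectl describe nodes, and the container-status fallback. A local analogue of the two journal/dmesg probes, with the command strings copied from the log and sudo dropped:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        probes := []struct{ name, script string }{
            {"kubelet", "journalctl -u kubelet -n 400"},
            {"dmesg", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        }
        for _, p := range probes {
            out, err := exec.Command("/bin/bash", "-c", p.script).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s\n", p.name, err, out)
        }
    }
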
	I0801 17:35:21.339622   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:21.400203   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:21.433513   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.433525   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:21.433585   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:21.479281   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.479293   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:21.479351   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:21.528053   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.528075   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:21.528152   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:21.570823   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.570842   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:21.570914   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:21.622051   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.622066   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:21.622120   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:21.662421   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.662433   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:21.662494   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:21.700986   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.701004   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:21.701071   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:21.761715   30307 logs.go:274] 0 containers: []
	W0801 17:35:21.761733   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:21.761744   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:21.761754   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:21.812508   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:21.812527   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:21.829925   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:21.829963   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:21.894716   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:21.894731   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:21.894740   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:21.915852   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:21.915872   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:23.988264   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.072039463s)
	I0801 17:35:26.488923   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:26.902539   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:26.934000   30307 logs.go:274] 0 containers: []
	W0801 17:35:26.934013   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:26.934097   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:26.962321   30307 logs.go:274] 0 containers: []
	W0801 17:35:26.962333   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:26.962392   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:26.991695   30307 logs.go:274] 0 containers: []
	W0801 17:35:26.991707   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:26.991767   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:27.019837   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.019849   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:27.019909   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:27.049346   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.049358   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:27.049416   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:27.078615   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.078626   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:27.078682   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:27.107692   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.107705   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:27.107764   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:27.135696   30307 logs.go:274] 0 containers: []
	W0801 17:35:27.135711   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:27.135718   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:27.135726   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:27.179734   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:27.179751   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:27.192465   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:27.192482   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:27.246895   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:27.246908   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:27.246915   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:27.260599   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:27.260611   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:29.314532   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053665084s)
	I0801 17:35:31.815083   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:31.903100   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:31.934197   30307 logs.go:274] 0 containers: []
	W0801 17:35:31.934208   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:31.934264   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:31.963017   30307 logs.go:274] 0 containers: []
	W0801 17:35:31.963028   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:31.963086   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:31.993025   30307 logs.go:274] 0 containers: []
	W0801 17:35:31.993039   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:31.993098   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:32.022103   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.022116   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:32.022174   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:32.051243   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.051255   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:32.051310   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:32.081226   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.081238   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:32.081294   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:32.110522   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.110535   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:32.110593   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:32.139913   30307 logs.go:274] 0 containers: []
	W0801 17:35:32.139927   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:32.139935   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:32.139943   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:32.181780   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:32.181796   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:32.194244   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:32.194258   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:32.244454   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:32.244465   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:32.244472   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:32.258059   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:32.258071   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:34.313901   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05563403s)
	I0801 17:35:36.814353   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:36.902028   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:36.932525   30307 logs.go:274] 0 containers: []
	W0801 17:35:36.932537   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:36.932595   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:36.965941   30307 logs.go:274] 0 containers: []
	W0801 17:35:36.965952   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:36.966010   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:36.997194   30307 logs.go:274] 0 containers: []
	W0801 17:35:36.997206   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:36.997265   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:37.027992   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.028004   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:37.028058   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:37.057894   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.057906   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:37.057963   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:37.091455   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.091467   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:37.091527   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:37.127099   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.127112   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:37.127168   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:37.164814   30307 logs.go:274] 0 containers: []
	W0801 17:35:37.216228   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:37.216316   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:37.216333   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:37.259473   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:37.259490   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:37.271319   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:37.271338   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:37.326930   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:37.326944   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:37.326956   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:37.342336   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:37.342350   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:39.395576   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053071303s)
	I0801 17:35:41.898084   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:42.402026   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:42.434886   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.434900   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:42.434955   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:42.464377   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.464389   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:42.464445   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:42.492747   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.492759   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:42.492818   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:42.521139   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.521153   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:42.521209   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:42.550296   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.550307   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:42.550363   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:42.579268   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.579281   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:42.579338   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:42.608287   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.608299   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:42.608352   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:42.637135   30307 logs.go:274] 0 containers: []
	W0801 17:35:42.637150   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:42.637163   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:42.637175   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:42.650659   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:42.650670   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:44.706919   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056125086s)
	I0801 17:35:44.707024   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:44.707030   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:44.746683   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:44.746696   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:44.757796   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:44.757808   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:44.810488   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:47.311546   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:47.402199   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:47.431779   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.431796   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:47.431878   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:47.462490   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.462504   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:47.462563   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:47.491434   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.491447   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:47.491504   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:47.520881   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.520894   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:47.520968   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:47.550517   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.550529   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:47.550584   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:47.580190   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.580205   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:47.580261   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:47.608687   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.608698   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:47.608757   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:47.638031   30307 logs.go:274] 0 containers: []
	W0801 17:35:47.638044   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:47.638051   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:47.638057   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:47.649363   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:47.649376   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:47.701537   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:47.701547   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:47.701554   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:47.714906   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:47.714918   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:49.767687   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052668978s)
	I0801 17:35:49.767793   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:49.767799   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:52.306896   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:52.404357   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:52.435516   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.435528   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:52.435587   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:52.466505   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.466517   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:52.466576   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:52.495280   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.495292   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:52.495351   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:52.523452   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.523464   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:52.523522   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:52.552296   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.552308   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:52.552367   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:52.582614   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.582628   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:52.582686   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:52.611494   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.611510   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:52.611571   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:52.643062   30307 logs.go:274] 0 containers: []
	W0801 17:35:52.643073   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:52.643081   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:52.643088   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:52.683875   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:52.683894   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:52.696292   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:52.696306   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:52.751367   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:52.751385   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:52.751398   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:52.764882   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:52.764895   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:54.823481   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058501379s)
	I0801 17:35:57.325623   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:35:57.404554   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:35:57.435795   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.435806   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:35:57.435864   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:35:57.464534   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.464547   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:35:57.464609   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:35:57.493563   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.493576   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:35:57.493631   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:35:57.521806   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.521818   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:35:57.521876   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:35:57.550038   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.550052   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:35:57.550128   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:35:57.584225   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.584251   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:35:57.584312   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:35:57.613276   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.613289   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:35:57.613348   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:35:57.641915   30307 logs.go:274] 0 containers: []
	W0801 17:35:57.641927   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:35:57.641934   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:35:57.641942   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:35:57.681293   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:35:57.681305   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:35:57.692507   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:35:57.692519   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:35:57.744366   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:35:57.744377   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:35:57.744384   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:35:57.758258   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:35:57.758270   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:35:59.813771   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055426001s)
	I0801 17:36:02.314786   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:02.403097   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:02.432291   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.432303   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:02.432366   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:02.462408   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.462420   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:02.462478   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:02.491149   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.491167   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:02.491224   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:02.519302   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.519315   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:02.519372   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:02.548267   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.548281   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:02.548342   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:02.576524   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.576538   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:02.576595   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:02.605216   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.605228   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:02.605287   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:02.634873   30307 logs.go:274] 0 containers: []
	W0801 17:36:02.634885   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:02.634892   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:02.634902   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:02.648952   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:02.648965   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:04.701091   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052060777s)
	I0801 17:36:04.701205   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:04.701212   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:04.740173   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:04.740190   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:04.751825   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:04.751838   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:04.803705   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:07.306022   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:07.404847   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:07.435505   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.435517   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:07.435573   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:07.463625   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.463637   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:07.463694   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:07.491535   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.491547   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:07.491610   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:07.520843   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.520855   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:07.520914   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:07.549909   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.549922   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:07.549979   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:07.578735   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.578749   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:07.578812   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:07.609291   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.609304   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:07.609360   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:07.638717   30307 logs.go:274] 0 containers: []
	W0801 17:36:07.638731   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:07.638739   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:07.638746   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:07.650180   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:07.650194   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:07.708994   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:07.709004   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:07.709011   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:07.722398   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:07.722410   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:09.776740   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054270118s)
	I0801 17:36:09.776854   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:09.776862   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:12.317307   30307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:36:12.402938   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:36:12.442523   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.442537   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:36:12.442601   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:36:12.470774   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.470787   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:36:12.470855   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:36:12.508498   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.508512   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:36:12.508573   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:36:12.542161   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.542174   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:36:12.542230   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:36:12.570767   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.570782   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:36:12.570844   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:36:12.610930   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.610948   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:36:12.610993   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:36:12.647001   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.647013   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:36:12.647065   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:36:12.688997   30307 logs.go:274] 0 containers: []
	W0801 17:36:12.689014   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:36:12.689023   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:36:12.689035   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:36:12.739740   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:36:12.739761   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:36:12.753725   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:36:12.753746   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:36:12.840901   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:36:12.840915   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:36:12.840923   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0801 17:36:12.855530   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:36:12.855545   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:36:14.911131   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055529458s)
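
The trace above shows the logs collector probing each control-plane component by container name and finding none, consistent with the apiserver on localhost:8443 refusing connections. A minimal shell equivalent of that probe loop (to run inside the affected node, e.g. via minikube ssh; assumes the Docker runtime, as in the log):

  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kubernetes-dashboard storage-provisioner kube-controller-manager; do
    # same filter the collector uses; empty output matches the "0 containers" lines above
    docker ps -a --filter=name=k8s_${c} --format={{.ID}}
  done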
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:30:28 UTC, end at Tue 2022-08-02 00:36:19 UTC. --
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.532070377Z" level=info msg="ignoring event" container=a8da684b35304f3e02c9af9174e6cd8273f50961cbd3510fa95feaa0b0ef11da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.601971814Z" level=info msg="ignoring event" container=1673a38e152c3c347b2a1d6111ad96c81b4ecd7489d60918795a16a66f6f8184 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.680608258Z" level=info msg="ignoring event" container=1e2a1561550fd9425759cb5d62096c799bb6d8d07766d0baa078a833e7840f02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.752343636Z" level=info msg="ignoring event" container=d8e0058ff4097f002da86cda4b0d201e903ecdf31e4c4c5887af2c0ffef14c89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.867065825Z" level=info msg="ignoring event" container=68e502f0033cb39346e7c3b665f5e295c0f6aed47243cf0feee2a94978e1f42c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:49 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:49.935003880Z" level=info msg="ignoring event" container=0e6dbe63e82dbc9579d26f29de928d47583a733aaae2f275dc7fe74ac0b7175f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:34:50 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:34:50.023355144Z" level=info msg="ignoring event" container=02237302c9e424373e4cedf08288baefb7b510b9ab354c2efddf7ec54a1e6032 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:35:14 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:14.564908110Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:14 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:14.565003761Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:14 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:14.566326202Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:16 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:16.305662800Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:35:16 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:16.600730986Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:35:19 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:19.374987682Z" level=info msg="ignoring event" container=40bc6ff544632d13729d6a699345b16ed533c8707457996a49d84f2354f55b04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:35:19 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:19.558597794Z" level=info msg="ignoring event" container=d18306f0dc48f3c1c5a1278d2211b23770e50ee329c7cd601b5d1f50e38a6773 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:35:19 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:19.962066991Z" level=info msg="ignoring event" container=537070bd27fe914e34196cb9e7ccee69444b543bb90b8e7cd4c17f2d6544f797 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:35:20 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:20.121317516Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Aug 02 00:35:20 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:20.852763154Z" level=info msg="ignoring event" container=93c2501241449b2ba013b38a8d7fa6af1623b332eb5fda06c7bd25a4d777c2b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:35:28 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:28.764716042Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:28 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:28.765068304Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:28 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:28.766221735Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:35:37 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:35:37.455852516Z" level=info msg="ignoring event" container=bc1913d66d5a68da924b860e256788e3d479d9ae2613c786d0fa96008de3dbcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:36:16 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:36:16.790711770Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:36:16 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:36:16.790756169Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:36:16 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:36:16.793084963Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:36:16 embed-certs-20220801172918-13911 dockerd[544]: time="2022-08-02T00:36:16.929518641Z" level=info msg="ignoring event" container=aff51826cbc70bb1e9f19d9145aea554c8658c9c12fd0e32e1cbc59631243371 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
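
The repeated `lookup fake.domain ... no such host` entries appear expected: the metrics-server image is pointed at an unresolvable registry (`fake.domain`, visible in the kubelet section below), so every pull fails at DNS. One way to reproduce the exact dockerd error path from the host (a sketch, assuming the profile is still running):

  minikube -p embed-certs-20220801172918-13911 ssh -- sudo docker pull fake.domain/k8s.gcr.io/echoserver:1.4
  # expected: Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain ... no such host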
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	aff51826cbc70       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   3                   9ecf70971fc58
	85e487f4c7c0b       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   55 seconds ago       Running             kubernetes-dashboard        0                   a1466bd874fca
	41ced93d20477       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   6d9cc4db534c0
	d1dbd2b7715b3       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   034f4758d0a35
	af9849a5e63b0       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   e0dc213ab39ca
	b3ff7d2aea220       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   5ec897f807247
	6ca99271de288       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   c2943a4f5c988
	4d084d04150f4       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   43ae96b09d011
	c3f4ade928adb       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   3235c8a5c24ab
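
Per the table, every control-plane container is Running on attempt 0; only dashboard-metrics-scraper keeps exiting (attempt 3). Its last output is still retrievable from the runtime, since docker logs works on Exited containers (a sketch, using the container ID from the table):

  minikube -p embed-certs-20220801172918-13911 ssh -- sudo docker logs aff51826cbc70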
	
	* 
	* ==> coredns [af9849a5e63b] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
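
The two configuration MD5 lines bracket a CoreDNS reload, which is typically triggered by a change to the coredns ConfigMap. To confirm the Corefile currently in effect (a sketch, assuming the profile's kubeconfig context is active):

  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'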
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220801172918-13911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220801172918-13911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93
	                    minikube.k8s.io/name=embed-certs-20220801172918-13911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_08_01T17_34_58_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Aug 2022 00:34:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220801172918-13911
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Aug 2022 00:36:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Aug 2022 00:36:12 +0000   Tue, 02 Aug 2022 00:36:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Aug 2022 00:36:12 +0000   Tue, 02 Aug 2022 00:36:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Aug 2022 00:36:12 +0000   Tue, 02 Aug 2022 00:36:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 02 Aug 2022 00:36:12 +0000   Tue, 02 Aug 2022 00:36:12 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220801172918-13911
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c192b04687c403f8fbb9bc7975b21b3
	  System UUID:                6be68503-085a-4635-9350-f578be5c27e0
	  Boot ID:                    71cf565c-fd32-45eb-95e1-c87a7a5ba5a0
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9cxff                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     68s
	  kube-system                 etcd-embed-certs-20220801172918-13911                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kube-apiserver-embed-certs-20220801172918-13911             250m (4%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-embed-certs-20220801172918-13911    200m (3%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-x9k7x                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-embed-certs-20220801172918-13911             100m (1%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 metrics-server-5c6f97fb75-ssb94                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         66s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-vmfnk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-8fcx8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 66s              kube-proxy       
	  Normal  NodeReady                81s              kubelet          Node embed-certs-20220801172918-13911 status is now: NodeReady
	  Normal  Starting                 81s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  81s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  81s              kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s              kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s              kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           68s              node-controller  Node embed-certs-20220801172918-13911 event: Registered Node embed-certs-20220801172918-13911 in Controller
	  Normal  NodeNotReady             7s               node-controller  Node embed-certs-20220801172918-13911 status is now: NodeNotReady
	  Normal  Starting                 7s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7s (x2 over 7s)  kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x2 over 7s)  kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x2 over 7s)  kubelet          Node embed-certs-20220801172918-13911 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s               kubelet          Node embed-certs-20220801172918-13911 status is now: NodeNotReady
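
The node flipped to NotReady seven seconds before this dump because the kubelet had just been restarted ("Starting kubelet.") and its pod lifecycle event generator (PLEG) had not yet completed a successful relist. One way to watch the conditions recover (a sketch, assuming the test context):

  kubectl get node embed-certs-20220801172918-13911 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'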
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [4d084d04150f] <==
	* {"level":"info","ts":"2022-08-02T00:34:53.021Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-08-02T00:34:53.021Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-08-02T00:34:53.021Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-08-02T00:34:53.021Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:34:53.021Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-08-02T00:34:53.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:34:53.414Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:embed-certs-20220801172918-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:34:53.414Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:34:53.420Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-08-02T00:34:53.414Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:34:53.414Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:34:53.414Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:34:53.420Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-08-02T00:34:53.421Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:34:53.421Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:34:53.421Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:34:53.421Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2022-08-02T00:36:19.810Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"128.276015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-08-02T00:36:19.810Z","caller":"traceutil/trace.go:171","msg":"trace[1491340193] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:622; }","duration":"128.529901ms","start":"2022-08-02T00:36:19.681Z","end":"2022-08-02T00:36:19.810Z","steps":["trace[1491340193] 'agreement among raft nodes before linearized reading'  (duration: 53.190929ms)","trace[1491340193] 'range keys from in-memory index tree'  (duration: 75.066689ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  00:36:20 up  1:01,  0 users,  load average: 0.87, 1.05, 1.11
	Linux embed-certs-20220801172918-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [c3f4ade928ad] <==
	* I0802 00:34:57.801876       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 00:34:58.567636       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0802 00:34:58.572879       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0802 00:34:58.581249       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0802 00:34:58.663627       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 00:35:11.465798       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0802 00:35:11.481781       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0802 00:35:13.734945       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0802 00:35:13.949702       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.103.102.61]
	W0802 00:35:14.755759       1 handler_proxy.go:102] no RequestInfo found in the context
	W0802 00:35:14.755870       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:35:14.755907       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:35:14.755914       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0802 00:35:14.756053       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:35:14.756958       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0802 00:35:14.764192       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.98.83.44]
	I0802 00:35:14.830459       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.99.89.46]
	W0802 00:36:14.720251       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:36:14.720329       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:36:14.720337       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:36:14.721427       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:36:14.721480       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:36:14.721486       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
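
The recurring 503s for v1beta1.metrics.k8s.io follow directly from the failed metrics-server image pull: the aggregated APIService has no healthy backend, so the apiserver re-queues its OpenAPI fetch about once a minute. The aggregation status can be checked directly (a sketch, assuming the test context):

  kubectl get apiservice v1beta1.metrics.k8s.io
  # expect AVAILABLE=False while metrics-server is stuck in ErrImagePull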
	
	* 
	* ==> kube-controller-manager [6ca99271de28] <==
	* I0802 00:35:14.673901       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:35:14.679001       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:35:14.679149       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:35:14.679180       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:35:14.719983       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0802 00:35:14.723172       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0802 00:35:14.723171       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:35:14.723188       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:35:14.723220       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:35:14.730814       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:35:14.730858       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:35:14.755676       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-8fcx8"
	I0802 00:35:14.768883       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-vmfnk"
	E0802 00:36:12.308816       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0802 00:36:12.317342       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0802 00:36:12.385329       1 event.go:294] "Event occurred" object="embed-certs-20220801172918-13911" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node embed-certs-20220801172918-13911 status is now: NodeNotReady"
	I0802 00:36:12.404794       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-embed-certs-20220801172918-13911" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.409059       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d-9cxff" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.415377       1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.422122       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-8fcx8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.437592       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-x9k7x" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.485571       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-embed-certs-20220801172918-13911" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.490756       1 event.go:294] "Event occurred" object="kube-system/etcd-embed-certs-20220801172918-13911" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0802 00:36:12.500049       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0802 00:36:12.500155       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-embed-certs-20220801172918-13911" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
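
Once the kubelet restart marked the only node NotReady, the controller manager fanned NodeNotReady warnings out to every pod and entered master disruption mode, which pauses evictions while all nodes are unhealthy. The same sequence is visible from the event stream (a sketch):

  kubectl get events -A --sort-by=.lastTimestamp | tail -n 20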
	
	* 
	* ==> kube-proxy [d1dbd2b7715b] <==
	* I0802 00:35:13.641118       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:35:13.641167       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:35:13.641229       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:35:13.731464       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:35:13.731514       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:35:13.731523       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:35:13.731532       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:35:13.731554       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:35:13.731719       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:35:13.732067       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:35:13.732074       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:35:13.733051       1 config.go:317] "Starting service config controller"
	I0802 00:35:13.733063       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:35:13.733077       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:35:13.733079       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:35:13.733629       1 config.go:444] "Starting node config controller"
	I0802 00:35:13.733637       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:35:13.833450       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0802 00:35:13.833539       1 shared_informer.go:262] Caches are synced for service config
	I0802 00:35:13.835253       1 shared_informer.go:262] Caches are synced for node config
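
kube-proxy came up cleanly in iptables mode (the empty proxyMode="" only means no mode was configured, so it fell back to the default) and synced all three caches. Its service rules can be inspected on the node (a sketch):

  minikube -p embed-certs-20220801172918-13911 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head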
	
	* 
	* ==> kube-scheduler [b3ff7d2aea22] <==
	* W0802 00:34:55.740483       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 00:34:55.740495       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 00:34:55.740743       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 00:34:55.740824       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 00:34:55.740755       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 00:34:55.740936       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 00:34:55.741015       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 00:34:55.741047       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 00:34:55.741052       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 00:34:55.741062       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 00:34:56.629207       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 00:34:56.629244       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 00:34:56.633374       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 00:34:56.633408       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 00:34:56.641993       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 00:34:56.642026       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 00:34:56.645302       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 00:34:56.645338       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 00:34:56.739131       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 00:34:56.739149       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 00:34:56.846700       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 00:34:56.846741       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 00:34:56.893217       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 00:34:56.893254       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0802 00:34:59.037069       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
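
All of the scheduler's "forbidden" warnings predate 00:34:59, when its client-ca informer cache finally synced; this is the usual startup race while RBAC bootstrap roles are being created, not a persistent permission problem. A spot check after startup (a sketch; impersonation itself requires sufficient rights):

  kubectl auth can-i list pods --as=system:kube-scheduler
  # expect "yes" once the bootstrap ClusterRoleBindings exist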
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:30:28 UTC, end at Tue 2022-08-02 00:36:20 UTC. --
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091140    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/07af04bc-f4e5-4715-9a1d-b60f73f55288-tmp-dir\") pod \"metrics-server-5c6f97fb75-ssb94\" (UID: \"07af04bc-f4e5-4715-9a1d-b60f73f55288\") " pod="kube-system/metrics-server-5c6f97fb75-ssb94"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091195    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qztlv\" (UniqueName: \"kubernetes.io/projected/4f72400e-5fc3-406e-b35b-742f9cd4d378-kube-api-access-qztlv\") pod \"storage-provisioner\" (UID: \"4f72400e-5fc3-406e-b35b-742f9cd4d378\") " pod="kube-system/storage-provisioner"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091249    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b4af731a-19c9-4ba9-ab8f-fe20332332d4-kube-proxy\") pod \"kube-proxy-x9k7x\" (UID: \"b4af731a-19c9-4ba9-ab8f-fe20332332d4\") " pod="kube-system/kube-proxy-x9k7x"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091299    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdczj\" (UniqueName: \"kubernetes.io/projected/6466af83-b5c4-4761-b138-0b5c803c81fd-kube-api-access-jdczj\") pod \"dashboard-metrics-scraper-dffd48c4c-vmfnk\" (UID: \"6466af83-b5c4-4761-b138-0b5c803c81fd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-vmfnk"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091336    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphb7\" (UniqueName: \"kubernetes.io/projected/0d867994-9c56-41dc-9234-3dd9bbe748ef-kube-api-access-jphb7\") pod \"kubernetes-dashboard-5fd5574d9f-8fcx8\" (UID: \"0d867994-9c56-41dc-9234-3dd9bbe748ef\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-8fcx8"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091358    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcf88\" (UniqueName: \"kubernetes.io/projected/07af04bc-f4e5-4715-9a1d-b60f73f55288-kube-api-access-jcf88\") pod \"metrics-server-5c6f97fb75-ssb94\" (UID: \"07af04bc-f4e5-4715-9a1d-b60f73f55288\") " pod="kube-system/metrics-server-5c6f97fb75-ssb94"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091373    9804 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4f72400e-5fc3-406e-b35b-742f9cd4d378-tmp\") pod \"storage-provisioner\" (UID: \"4f72400e-5fc3-406e-b35b-742f9cd4d378\") " pod="kube-system/storage-provisioner"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:14.091395    9804 reconciler.go:157] "Reconciler: start to sync state"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:14.289401    9804 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220801172918-13911\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220801172918-13911"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:14.656946    9804 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220801172918-13911\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220801172918-13911"
	Aug 02 00:36:14 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:14.855674    9804 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220801172918-13911\" already exists" pod="kube-system/etcd-embed-certs-20220801172918-13911"
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:15.053384    9804 request.go:601] Waited for 1.05075005s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:15.116757    9804 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220801172918-13911\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220801172918-13911"
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:15.194384    9804 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:15.194468    9804 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3a5893dd-8ee8-436b-bca5-8c49d6224160-config-volume podName:3a5893dd-8ee8-436b-bca5-8c49d6224160 nodeName:}" failed. No retries permitted until 2022-08-02 00:36:15.694452676 +0000 UTC m=+3.174427119 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3a5893dd-8ee8-436b-bca5-8c49d6224160-config-volume") pod "coredns-6d4b75cb6d-9cxff" (UID: "3a5893dd-8ee8-436b-bca5-8c49d6224160") : failed to sync configmap cache: timed out waiting for the condition
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:15.194623    9804 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Aug 02 00:36:15 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:15.194738    9804 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b4af731a-19c9-4ba9-ab8f-fe20332332d4-kube-proxy podName:b4af731a-19c9-4ba9-ab8f-fe20332332d4 nodeName:}" failed. No retries permitted until 2022-08-02 00:36:15.694726311 +0000 UTC m=+3.174700755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b4af731a-19c9-4ba9-ab8f-fe20332332d4-kube-proxy") pod "kube-proxy-x9k7x" (UID: "b4af731a-19c9-4ba9-ab8f-fe20332332d4") : failed to sync configmap cache: timed out waiting for the condition
	Aug 02 00:36:16 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:16.755993    9804 scope.go:110] "RemoveContainer" containerID="bc1913d66d5a68da924b860e256788e3d479d9ae2613c786d0fa96008de3dbcd"
	Aug 02 00:36:16 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:16.793600    9804 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 02 00:36:16 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:16.793665    9804 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 02 00:36:16 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:16.793820    9804 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jcf88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-ssb94_kube-system(07af04bc-f4e5-4715-9a1d-b60f73f55288): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Aug 02 00:36:16 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:16.793870    9804 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-ssb94" podUID=07af04bc-f4e5-4715-9a1d-b60f73f55288
	Aug 02 00:36:17 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:17.026386    9804 scope.go:110] "RemoveContainer" containerID="bc1913d66d5a68da924b860e256788e3d479d9ae2613c786d0fa96008de3dbcd"
	Aug 02 00:36:17 embed-certs-20220801172918-13911 kubelet[9804]: I0802 00:36:17.027067    9804 scope.go:110] "RemoveContainer" containerID="aff51826cbc70bb1e9f19d9145aea554c8658c9c12fd0e32e1cbc59631243371"
	Aug 02 00:36:17 embed-certs-20220801172918-13911 kubelet[9804]: E0802 00:36:17.027288    9804 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-vmfnk_kubernetes-dashboard(6466af83-b5c4-4761-b138-0b5c803c81fd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-vmfnk" podUID=6466af83-b5c4-4761-b138-0b5c803c81fd
	
	* 
	* ==> kubernetes-dashboard [85e487f4c7c0] <==
	* 2022/08/02 00:35:25 Using namespace: kubernetes-dashboard
	2022/08/02 00:35:25 Using in-cluster config to connect to apiserver
	2022/08/02 00:35:25 Using secret token for csrf signing
	2022/08/02 00:35:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/08/02 00:35:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/08/02 00:35:25 Successful initial request to the apiserver, version: v1.24.3
	2022/08/02 00:35:25 Generating JWE encryption key
	2022/08/02 00:35:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/08/02 00:35:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/08/02 00:35:25 Initializing JWE encryption key from synchronized object
	2022/08/02 00:35:25 Creating in-cluster Sidecar client
	2022/08/02 00:35:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/08/02 00:35:25 Serving insecurely on HTTP port: 9090
	2022/08/02 00:36:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/08/02 00:35:25 Starting overwatch
	
	* 
	* ==> storage-provisioner [41ced93d2047] <==
	* I0802 00:35:14.672021       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 00:35:14.724512       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 00:35:14.725138       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 00:35:14.734508       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 00:35:14.734746       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220801172918-13911_8377462b-da99-49cf-8410-3e85e4e99b51!
	I0802 00:35:14.734744       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"52ace0d2-f308-4ead-b9ec-29e0d77bdfe0", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220801172918-13911_8377462b-da99-49cf-8410-3e85e4e99b51 became leader
	I0802 00:35:14.835860       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220801172918-13911_8377462b-da99-49cf-8410-3e85e4e99b51!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220801172918-13911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-ssb94
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220801172918-13911 describe pod metrics-server-5c6f97fb75-ssb94
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220801172918-13911 describe pod metrics-server-5c6f97fb75-ssb94: exit status 1 (287.070869ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-ssb94" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220801172918-13911 describe pod metrics-server-5c6f97fb75-ssb94: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (43.70s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:41:40.574863   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:42:01.334268   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:42:02.193357   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:42:37.560043   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 17:42:39.196956   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:43:03.624247   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:43:04.520088   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:44:13.438232   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:45:17.050276   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:45:54.820791   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:46:06.448210   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:46:40.100143   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:46:40.579927   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:47:01.339786   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:47:02.198688   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:47:17.869677   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:47:23.202180   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:47:23.207692   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:47:23.218639   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:47:23.239178   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:47:23.279525   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:47:23.359723   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:47:23.520738   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:47:23.843081   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:47:24.483210   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:47:25.765374   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:47:28.325800   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:47:33.448219   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:47:37.566614   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:47:39.202068   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 17:47:43.690737   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:47:50.386355   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:48:04.171175   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:48:04.523166   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:48:15.124526   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:48:24.447792   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:48:25.291759   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:48:45.132784   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:49:00.616626   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:49:27.574581   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:49:43.400296   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:50:17.052962   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 2 (440.103499ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-20220801172716-13911" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220801172716-13911
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220801172716-13911:

-- stdout --
	[
	    {
	        "Id": "dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6",
	        "Created": "2022-08-02T00:27:24.523444703Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246645,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:33:03.548358911Z",
	            "FinishedAt": "2022-08-02T00:33:00.53307201Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hostname",
	        "HostsPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hosts",
	        "LogPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6-json.log",
	        "Name": "/old-k8s-version-20220801172716-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220801172716-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220801172716-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4ec465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/docker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220801172716-13911",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220801172716-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220801172716-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7033b72c7cb5dd94daf6f66da715470e46ad00b0bd6f037aa3061302fc290971",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50784"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50785"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50786"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50787"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50783"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7033b72c7cb5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220801172716-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dfb69a533565",
	                        "old-k8s-version-20220801172716-13911"
	                    ],
	                    "NetworkID": "947fc21b2e0fc27b09dd4dd43b477927d08a61d441a541fee2a6fa712bca71b9",
	                    "EndpointID": "a3b831dd7b0090943b49fd33eab9fa69501e40c1e99428d6b52499a1a33c63e3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 2 (441.328871ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220801172716-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220801172716-13911 logs -n 25: (3.495715435s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                               | old-k8s-version-20220801172716-13911            | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT | 01 Aug 22 17:33 PDT |
	|         | old-k8s-version-20220801172716-13911              |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220801172716-13911            | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220801173625-13911      | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | disable-driver-mounts-20220801173625-13911        |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:42 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:45:07
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:45:07.234304   31913 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:45:07.234495   31913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:45:07.234500   31913 out.go:309] Setting ErrFile to fd 2...
	I0801 17:45:07.234506   31913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:45:07.234609   31913 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:45:07.235111   31913 out.go:303] Setting JSON to false
	I0801 17:45:07.250217   31913 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9878,"bootTime":1659391229,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:45:07.250344   31913 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:45:07.272605   31913 out.go:177] * [default-k8s-different-port-20220801174348-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:45:07.294231   31913 notify.go:193] Checking for updates...
	I0801 17:45:07.316180   31913 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:45:07.337992   31913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:45:07.359246   31913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:45:07.380136   31913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:45:07.401417   31913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:45:07.423835   31913 config.go:180] Loaded profile config "default-k8s-different-port-20220801174348-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:45:07.424579   31913 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:45:07.493796   31913 docker.go:137] docker version: linux-20.10.17
	I0801 17:45:07.493922   31913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:45:07.627933   31913 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:45:07.572823528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:45:07.671605   31913 out.go:177] * Using the docker driver based on existing profile
	I0801 17:45:07.693541   31913 start.go:284] selected driver: docker
	I0801 17:45:07.693586   31913 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220801174348-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:45:07.693713   31913 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:45:07.697078   31913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:45:07.829506   31913 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:45:07.755017741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:45:07.829656   31913 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 17:45:07.829672   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:45:07.829681   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:45:07.829693   31913 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220801174348-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:45:07.873313   31913 out.go:177] * Starting control plane node default-k8s-different-port-20220801174348-13911 in cluster default-k8s-different-port-20220801174348-13911
	I0801 17:45:07.894272   31913 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:45:07.916207   31913 out.go:177] * Pulling base image ...
	I0801 17:45:07.958286   31913 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:45:07.958311   31913 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:45:07.958367   31913 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0801 17:45:07.958398   31913 cache.go:57] Caching tarball of preloaded images
	I0801 17:45:07.958586   31913 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:45:07.958621   31913 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0801 17:45:07.959565   31913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/config.json ...
	I0801 17:45:08.023522   31913 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:45:08.023554   31913 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:45:08.023592   31913 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:45:08.023643   31913 start.go:371] acquiring machines lock for default-k8s-different-port-20220801174348-13911: {Name:mkf36bcbf3258128efc6b862fc1634fd58cb6b31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:45:08.023718   31913 start.go:375] acquired machines lock for "default-k8s-different-port-20220801174348-13911" in 52.949µs
	I0801 17:45:08.023737   31913 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:45:08.023747   31913 fix.go:55] fixHost starting: 
	I0801 17:45:08.023973   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:45:08.091536   31913 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220801174348-13911: state=Stopped err=<nil>
	W0801 17:45:08.091569   31913 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:45:08.135438   31913 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220801174348-13911" ...
	I0801 17:45:08.157366   31913 cli_runner.go:164] Run: docker start default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.512780   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:45:08.585392   31913 kic.go:415] container "default-k8s-different-port-20220801174348-13911" state is running.
	I0801 17:45:08.586032   31913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.658898   31913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/config.json ...
	I0801 17:45:08.659358   31913 machine.go:88] provisioning docker machine ...
	I0801 17:45:08.659384   31913 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220801174348-13911"
	I0801 17:45:08.659447   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.732726   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:08.732938   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:08.732958   31913 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220801174348-13911 && echo "default-k8s-different-port-20220801174348-13911" | sudo tee /etc/hostname
	I0801 17:45:08.854995   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220801174348-13911
	
	I0801 17:45:08.855093   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.929782   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:08.929919   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:08.929937   31913 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220801174348-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220801174348-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220801174348-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:45:09.043526   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: 
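
Note on the step above: the /etc/hosts edit is idempotent. An existing 127.0.1.1 line is rewritten in place, and the entry is appended only when no line for the hostname exists, so repeated restarts of the same container never duplicate it. A minimal Go sketch of the same check-then-append pattern (a hypothetical standalone helper, not minikube's actual code; the real step runs the shell snippet above over SSH):

    // ensureHostsEntry adds "ip name" to a hosts file unless a line for that
    // hostname is already present, mirroring the grep/sed/tee logic above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        for _, line := range strings.Split(string(data), "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[1] == name {
                return nil // entry already present; nothing to do
            }
        }
        f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintf(f, "%s %s\n", ip, name)
        return err
    }

    func main() {
        // /tmp/hosts and the node name are placeholders for illustration.
        if err := ensureHostsEntry("/tmp/hosts", "127.0.1.1", "demo-node"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
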
	I0801 17:45:09.043548   31913 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:45:09.043582   31913 ubuntu.go:177] setting up certificates
	I0801 17:45:09.043592   31913 provision.go:83] configureAuth start
	I0801 17:45:09.043662   31913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.122482   31913 provision.go:138] copyHostCerts
	I0801 17:45:09.122564   31913 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:45:09.122573   31913 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:45:09.122680   31913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:45:09.122870   31913 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:45:09.122879   31913 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:45:09.122942   31913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:45:09.123074   31913 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:45:09.123082   31913 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:45:09.123138   31913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:45:09.123253   31913 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220801174348-13911 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220801174348-13911]
	I0801 17:45:09.314883   31913 provision.go:172] copyRemoteCerts
	I0801 17:45:09.314960   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:45:09.315013   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.387026   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:09.473014   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:45:09.489683   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0801 17:45:09.506040   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 17:45:09.522131   31913 provision.go:86] duration metric: configureAuth took 478.520974ms
	I0801 17:45:09.522145   31913 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:45:09.522300   31913 config.go:180] Loaded profile config "default-k8s-different-port-20220801174348-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:45:09.522359   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.593649   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:09.593822   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:09.593832   31913 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:45:09.706218   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:45:09.706232   31913 ubuntu.go:71] root file system type: overlay
	I0801 17:45:09.706373   31913 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:45:09.706456   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.777069   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:09.777323   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:09.777370   31913 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:45:09.897154   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:45:09.897240   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.968155   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:09.968332   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:09.968348   31913 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:45:10.085677   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: 
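
Note on the guard above: the freshly rendered docker.service.new is only moved into place, and the daemon only reloaded and restarted, when diff reports that it differs from the unit already on disk, so an unchanged configuration avoids a disruptive engine restart. A rough Go equivalent of that write-only-if-changed idiom (hypothetical and simplified; the enable step from the real command is omitted):

    // updateUnit rewrites a systemd unit and restarts the service only when
    // the desired content differs from what is already installed.
    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func updateUnit(path string, desired []byte) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, desired) {
            return nil // identical content: skip the restart entirely
        }
        if err := os.WriteFile(path, desired, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%s failed: %v: %s", args[0], err, out)
            }
        }
        return nil
    }

    func main() {
        // Placeholder path and content for illustration only.
        if err := updateUnit("/tmp/docker.service", []byte("[Unit]\n")); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
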
	I0801 17:45:10.085692   31913 machine.go:91] provisioned docker machine in 1.42630245s
	I0801 17:45:10.085699   31913 start.go:307] post-start starting for "default-k8s-different-port-20220801174348-13911" (driver="docker")
	I0801 17:45:10.085706   31913 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:45:10.085791   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:45:10.085838   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.157469   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.242875   31913 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:45:10.246401   31913 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:45:10.246414   31913 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:45:10.246421   31913 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:45:10.246425   31913 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:45:10.246432   31913 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:45:10.246536   31913 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:45:10.246673   31913 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:45:10.246820   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:45:10.253819   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:45:10.270599   31913 start.go:310] post-start completed in 184.88853ms
	I0801 17:45:10.270679   31913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:45:10.270725   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.341693   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.424621   31913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:45:10.428724   31913 fix.go:57] fixHost completed within 2.40494101s
	I0801 17:45:10.428734   31913 start.go:82] releasing machines lock for "default-k8s-different-port-20220801174348-13911", held for 2.404972203s
	I0801 17:45:10.428805   31913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.499445   31913 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:45:10.499453   31913 ssh_runner.go:195] Run: systemctl --version
	I0801 17:45:10.499510   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.499521   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.577297   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.580177   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.863075   31913 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:45:10.872943   31913 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:45:10.873004   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:45:10.884327   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:45:10.896972   31913 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:45:10.964365   31913 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:45:11.037267   31913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:45:11.105843   31913 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:45:11.334865   31913 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0801 17:45:11.408996   31913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:45:11.478637   31913 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0801 17:45:11.489256   31913 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0801 17:45:11.489322   31913 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0801 17:45:11.493283   31913 start.go:471] Will wait 60s for crictl version
	I0801 17:45:11.493327   31913 ssh_runner.go:195] Run: sudo crictl version
	I0801 17:45:11.594433   31913 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
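
The crictl version check above succeeds against Docker because of the /etc/crictl.yaml written a few steps earlier: both its runtime-endpoint and image-endpoint point crictl at the cri-dockerd socket rather than a containerd or CRI-O one. A hypothetical sketch that renders that same two-line config:

    // Render the crictl config that targets the cri-dockerd socket.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const crictlYAML = "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n" +
            "image-endpoint: unix:///var/run/cri-dockerd.sock\n"
        // /tmp/crictl.yaml is a placeholder; the real file is /etc/crictl.yaml.
        if err := os.WriteFile("/tmp/crictl.yaml", []byte(crictlYAML), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
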
	I0801 17:45:11.594501   31913 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:45:11.628725   31913 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:45:11.707948   31913 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0801 17:45:11.708167   31913 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220801174348-13911 dig +short host.docker.internal
	I0801 17:45:11.835685   31913 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:45:11.835785   31913 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:45:11.839982   31913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:45:11.849128   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:11.920457   31913 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:45:11.920518   31913 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:45:11.950489   31913 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0801 17:45:11.950505   31913 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:45:11.950592   31913 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:45:11.979888   31913 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0801 17:45:11.979908   31913 cache_images.go:84] Images are preloaded, skipping loading
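
Loading is skipped here because every image expected from the preload tarball already appears in the docker images listing above. A hypothetical sketch of that presence check (the expected list is abbreviated for illustration):

    // imagesPreloaded reports whether every expected image:tag is already
    // present in the local Docker daemon.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func imagesPreloaded(expected []string) (bool, error) {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        for _, img := range expected {
            if !have[img] {
                return false, nil // at least one image missing: extract the tarball
            }
        }
        return true, nil
    }

    func main() {
        ok, err := imagesPreloaded([]string{
            "k8s.gcr.io/kube-apiserver:v1.24.3",
            "k8s.gcr.io/etcd:3.5.3-0",
        })
        fmt.Println(ok, err)
    }
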
	I0801 17:45:11.979982   31913 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:45:12.056792   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:45:12.056805   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:45:12.056818   31913 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:45:12.056833   31913 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220801174348-13911 NodeName:default-k8s-different-port-20220801174348-13911 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:45:12.056925   31913 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220801174348-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
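The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Assuming kubeadm v1.24.x is on the node's PATH, a dry run is one way to surface validation errors in such a file without touching cluster state:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run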
	
	I0801 17:45:12.057065   31913 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220801174348-13911 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0801 17:45:12.057131   31913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 17:45:12.066061   31913 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:45:12.066148   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:45:12.073618   31913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0801 17:45:12.087045   31913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
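With the 10-kubeadm.conf drop-in and kubelet.service unit copied into place, systemd still has to re-read its unit files before the new ExecStart takes effect; in this log that happens via the kubeadm kubelet-start phase further down. The manual equivalent would be:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet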
	I0801 17:45:12.099457   31913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0801 17:45:12.112836   31913 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:45:12.116178   31913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:45:12.125809   31913 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911 for IP: 192.168.67.2
	I0801 17:45:12.125918   31913 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:45:12.125966   31913 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:45:12.126040   31913 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.key
	I0801 17:45:12.126618   31913 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/apiserver.key.c7fa3a9e
	I0801 17:45:12.126780   31913 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/proxy-client.key
	I0801 17:45:12.127193   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:45:12.127456   31913 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:45:12.127470   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:45:12.127507   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:45:12.127537   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:45:12.127568   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:45:12.127653   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:45:12.128137   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:45:12.145970   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 17:45:12.162916   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:45:12.179232   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0801 17:45:12.195637   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:45:12.211954   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:45:12.228458   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:45:12.245049   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:45:12.273133   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:45:12.289297   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:45:12.305906   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:45:12.322249   31913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:45:12.334078   31913 ssh_runner.go:195] Run: openssl version
	I0801 17:45:12.338984   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:45:12.346569   31913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:45:12.350437   31913 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:45:12.350479   31913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:45:12.355640   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:45:12.362358   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:45:12.369524   31913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:45:12.373094   31913 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:45:12.373143   31913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:45:12.378297   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
	I0801 17:45:12.385323   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:45:12.392716   31913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:45:12.396215   31913 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:45:12.396253   31913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:45:12.401492   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
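The test -L / ln -fs pairs above reproduce what OpenSSL's c_rehash does: CA certificates are looked up by subject-name hash, so each PEM gets a symlink named <hash>.0. The hash values seen in the log (51391683, 3ec20f2e, b5213941) come straight from the openssl invocations, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0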
	I0801 17:45:12.408598   31913 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220801174348-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:45:12.408692   31913 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:45:12.437555   31913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:45:12.445130   31913 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:45:12.445143   31913 kubeadm.go:626] restartCluster start
	I0801 17:45:12.445184   31913 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:45:12.451625   31913 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:12.451684   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:12.522544   31913 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220801174348-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:45:12.522712   31913 kubeconfig.go:127] "default-k8s-different-port-20220801174348-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:45:12.523108   31913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:45:12.524240   31913 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:45:12.531709   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:12.531764   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:12.539797   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:12.740348   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:12.740540   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:12.750680   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:12.941944   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:12.942091   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:12.952401   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.141761   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.141933   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.152103   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.341140   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.341291   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.351393   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.541127   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.541267   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.550653   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.741989   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.742177   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.752445   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.939964   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.940062   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.949928   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.141998   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.142136   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.152691   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.340125   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.340267   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.350279   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.541428   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.541614   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.551563   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.741132   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.741260   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.751806   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.942014   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.942215   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.952554   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.140909   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.141047   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.151515   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.339961   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.340060   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.349894   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.539967   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.540029   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.548707   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.548716   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.548755   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.556495   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
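Every "Checking apiserver status ..." round above is the same probe on a short backoff: pgrep exiting with status 1 and empty stdout/stderr means no matching kube-apiserver process exists yet, and once the retry budget is exhausted minikube concludes the cluster needs reconfiguring. The probe by itself:

    # exits 0 and prints a PID once the apiserver process is up
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'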
	I0801 17:45:15.556506   31913 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0801 17:45:15.556515   31913 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:45:15.556573   31913 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:45:15.588109   31913 docker.go:443] Stopping containers: [5330cf5dab78 804bfd7a4dd6 b753d3511dd1 ec2aabab3838 a079991f7e29 56f67accc23d f4047c9cc1b3 ae0ff377c871 d505ae905c0f 76bf3aba28e0 f366c63a7d21 8f26f8c13f7f 0da89e56674b f94f6bde6263 64851a902487 66e806932a2b]
	I0801 17:45:15.588183   31913 ssh_runner.go:195] Run: docker stop 5330cf5dab78 804bfd7a4dd6 b753d3511dd1 ec2aabab3838 a079991f7e29 56f67accc23d f4047c9cc1b3 ae0ff377c871 d505ae905c0f 76bf3aba28e0 f366c63a7d21 8f26f8c13f7f 0da89e56674b f94f6bde6263 64851a902487 66e806932a2b
	I0801 17:45:15.617424   31913 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:45:15.627354   31913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:45:15.634554   31913 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  2 00:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug  2 00:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Aug  2 00:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug  2 00:44 /etc/kubernetes/scheduler.conf
	
	I0801 17:45:15.634603   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0801 17:45:15.641371   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0801 17:45:15.648041   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0801 17:45:15.654727   31913 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.654766   31913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0801 17:45:15.661325   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0801 17:45:15.668099   31913 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.668152   31913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0801 17:45:15.674654   31913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:45:15.681717   31913 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:45:15.681728   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:15.726589   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.500734   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.684111   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.732262   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.805126   31913 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:45:16.805184   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:17.316069   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:17.815974   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:18.316161   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:18.326717   31913 api_server.go:71] duration metric: took 1.521564045s to wait for apiserver process to appear ...
	I0801 17:45:18.326733   31913 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:45:18.326742   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:21.129223   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0801 17:45:21.129239   31913 api_server.go:102] status: https://127.0.0.1:52049/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:45:21.631396   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:21.638935   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:45:21.638953   31913 api_server.go:102] status: https://127.0.0.1:52049/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:45:22.130245   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:22.135894   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:45:22.135911   31913 api_server.go:102] status: https://127.0.0.1:52049/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:45:22.629735   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:22.636164   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 200:
	ok
	I0801 17:45:22.643785   31913 api_server.go:140] control plane version: v1.24.3
	I0801 17:45:22.643800   31913 api_server.go:130] duration metric: took 4.31699607s to wait for apiserver health ...
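The healthz progression above is normal for a cold apiserver: first a 403, because the anonymous probe arrives before the RBAC bootstrap roles that permit unauthenticated /healthz reads are reconciled; then 500s while the post-start hooks flagged [-] finish; finally 200. The same per-check detail can be fetched by hand (-k because the probe is anonymous and skips certificate verification):

    curl -k 'https://127.0.0.1:52049/healthz?verbose'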
	I0801 17:45:22.643806   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:45:22.643812   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:45:22.643822   31913 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:45:22.651516   31913 system_pods.go:59] 8 kube-system pods found
	I0801 17:45:22.651532   31913 system_pods.go:61] "coredns-6d4b75cb6d-5s86p" [e4978024-d992-4fd7-bec6-1d4cb093c4c8] Running
	I0801 17:45:22.651536   31913 system_pods.go:61] "etcd-default-k8s-different-port-20220801174348-13911" [c440b48e-48d8-4933-870b-c73df0860f90] Running
	I0801 17:45:22.651540   31913 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220801174348-13911" [e4032a9b-61fb-4493-b20a-e5d8f00382a1] Running
	I0801 17:45:22.651544   31913 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220801174348-13911" [39dbe98f-51c3-43d0-bca0-2ca31da431b5] Running
	I0801 17:45:22.651554   31913 system_pods.go:61] "kube-proxy-f7zxq" [f0307046-df65-4bb4-8bce-ddf9847f3c8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0801 17:45:22.651561   31913 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220801174348-13911" [8d33bc48-5ef3-41d2-8a6c-3fc70a048090] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0801 17:45:22.651568   31913 system_pods.go:61] "metrics-server-5c6f97fb75-647p7" [c842a29c-ef57-4fdd-be7a-43b9aa1f5178] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:45:22.651574   31913 system_pods.go:61] "storage-provisioner" [1b0a55a5-6df4-4f1c-a915-748eedde2dcd] Running
	I0801 17:45:22.651577   31913 system_pods.go:74] duration metric: took 7.750651ms to wait for pod list to return data ...
	I0801 17:45:22.651584   31913 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:45:22.654773   31913 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:45:22.654788   31913 node_conditions.go:123] node cpu capacity is 6
	I0801 17:45:22.654797   31913 node_conditions.go:105] duration metric: took 3.209718ms to run NodePressure ...
	I0801 17:45:22.654815   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:22.779173   31913 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0801 17:45:22.783016   31913 kubeadm.go:777] kubelet initialised
	I0801 17:45:22.783028   31913 kubeadm.go:778] duration metric: took 3.840293ms waiting for restarted kubelet to initialise ...
	I0801 17:45:22.783039   31913 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:45:22.798314   31913 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-5s86p" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.803583   31913 pod_ready.go:92] pod "coredns-6d4b75cb6d-5s86p" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:22.803592   31913 pod_ready.go:81] duration metric: took 5.265827ms waiting for pod "coredns-6d4b75cb6d-5s86p" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.803598   31913 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.807690   31913 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:22.807699   31913 pod_ready.go:81] duration metric: took 4.096609ms waiting for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.807705   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.812128   31913 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:22.812139   31913 pod_ready.go:81] duration metric: took 4.429356ms waiting for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.812147   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:23.049650   31913 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:23.049663   31913 pod_ready.go:81] duration metric: took 237.506184ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:23.049674   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f7zxq" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:25.452177   31913 pod_ready.go:102] pod "kube-proxy-f7zxq" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:25.956303   31913 pod_ready.go:92] pod "kube-proxy-f7zxq" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:25.956316   31913 pod_ready.go:81] duration metric: took 2.90659156s waiting for pod "kube-proxy-f7zxq" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:25.956321   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:27.967784   31913 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:29.967951   31913 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:32.469695   31913 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:34.967596   31913 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:34.967609   31913 pod_ready.go:81] duration metric: took 9.011143978s waiting for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:34.967617   31913 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:36.980491   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:39.477994   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:41.479741   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:43.978790   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:45.979374   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:48.479882   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:50.978835   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:53.480508   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:55.980599   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:58.477821   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:00.480184   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:02.978673   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:04.979236   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:06.980442   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:09.481308   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:11.978320   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:13.981763   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:16.478127   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:18.480044   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:20.979016   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:22.979653   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:25.478419   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:27.479556   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:29.480115   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:31.980465   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:33.980626   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:36.480658   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:38.979006   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:40.981768   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:43.480457   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:45.980250   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:48.481530   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:50.978561   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:52.979664   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:54.980231   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:56.980842   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:46:58.982943   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:01.479828   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:03.482672   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:05.979086   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:07.982332   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:10.479005   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:12.480025   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:14.482235   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:16.979326   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:18.980589   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:21.482584   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:23.979972   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:25.983075   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:28.479805   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:30.482624   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:32.979818   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:34.980778   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:37.479655   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:39.480136   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:41.980390   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:44.483064   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:46.980525   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:48.982541   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:51.480766   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:53.982272   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:55.982766   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:47:58.481978   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:00.983225   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:03.481420   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:05.483218   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:07.981513   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:09.983965   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:12.482580   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:14.981866   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:16.983535   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:19.480935   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:21.483477   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:23.980733   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:25.981338   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:27.982632   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:30.482321   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:32.981657   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:34.982204   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:37.479819   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:39.482183   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:41.483395   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:43.984485   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:46.483247   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:48.484593   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:50.981872   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:52.983510   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:54.984416   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:57.482475   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:48:59.982066   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:01.983433   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:04.481808   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:06.483918   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:08.484566   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:10.981629   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:12.983255   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:14.983483   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:17.482018   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:19.483424   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:21.984065   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:24.482544   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:26.984666   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:29.483581   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:31.984749   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:34.484939   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:34.976048   31913 pod_ready.go:81] duration metric: took 4m0.004717233s waiting for pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace to be "Ready" ...
	E0801 17:49:34.976075   31913 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0801 17:49:34.976092   31913 pod_ready.go:38] duration metric: took 4m12.189153798s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:49:34.976210   31913 kubeadm.go:630] restartCluster took 4m22.52701004s
	W0801 17:49:34.976332   31913 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
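The 4m0s timeout above is the product of a simple condition poll: minikube re-fetches the pod on the roughly two-second cadence visible in the timestamps and checks its Ready condition until the deadline lapses. A minimal client-go sketch of that kind of loop follows; the names are illustrative, not minikube's actual pod_ready.go:

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls every two seconds until the pod reports the
    // Ready condition or the timeout (4m0s in the log above) elapses.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat lookup errors as "not ready yet"
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }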
	I0801 17:49:34.976363   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0801 17:49:37.337570   31913 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.361154161s)
	I0801 17:49:37.337631   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:49:37.348151   31913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:49:37.356017   31913 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:49:37.356067   31913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:49:37.363491   31913 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:49:37.363525   31913 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:49:37.647145   31913 out.go:204]   - Generating certificates and keys ...
	I0801 17:49:38.415463   31913 out.go:204]   - Booting up control plane ...
	I0801 17:49:44.964434   31913 out.go:204]   - Configuring RBAC rules ...
	I0801 17:49:45.340117   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:49:45.340131   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:49:45.340148   31913 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:49:45.340246   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:45.340253   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93 minikube.k8s.io/name=default-k8s-different-port-20220801174348-13911 minikube.k8s.io/updated_at=2022_08_01T17_49_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:45.475719   31913 ops.go:34] apiserver oom_adj: -16
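The -16 read back here is the apiserver's legacy oom_adj score, which biases the kernel OOM killer away from killing that process. A standalone sketch of the same probe the logged shell pipeline performs:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
    )

    func main() {
        // Same probe as the logged command: cat /proc/$(pgrep kube-apiserver)/oom_adj
        out, err := exec.Command("/bin/bash", "-c",
            "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
        if err != nil {
            panic(err)
        }
        adj, err := strconv.Atoi(strings.TrimSpace(string(out)))
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", adj) // -16 here: shielded from the OOM killer
    }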
	I0801 17:49:45.475734   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:46.055191   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:46.555085   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:47.055196   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:47.555535   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:48.055243   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:48.555376   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:49.057149   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:49.556580   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:50.055221   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:50.555044   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:51.057215   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:51.555146   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:52.055363   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:52.556980   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:53.055045   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:53.555028   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:54.055942   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:54.555141   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:55.056685   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:55.555659   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:56.055753   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:56.557278   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:57.055447   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:57.555591   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:58.055688   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:58.555182   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:58.617518   31913 kubeadm.go:1045] duration metric: took 13.277140503s to wait for elevateKubeSystemPrivileges.
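The burst of identical `kubectl get sa default` commands above is a readiness gate: minikube retries on a roughly 500ms cadence until the apiserver has created the default ServiceAccount, then proceeds past the cluster-admin binding. An illustrative sketch of such a retry loop; kubectlPath and kubeconfig stand in for the paths used above:

    package bootstrap

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds,
    // mirroring the retry cadence visible in the log above.
    func waitForDefaultSA(kubectlPath, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            err := exec.Command("sudo", kubectlPath, "get", "sa", "default",
                "--kubeconfig="+kubeconfig).Run()
            if err == nil {
                return nil // the default service account exists
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account never appeared: %v", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }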
	I0801 17:49:58.617535   31913 kubeadm.go:397] StartCluster complete in 4m46.204525782s
	I0801 17:49:58.617551   31913 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:49:58.617629   31913 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:49:58.618157   31913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:49:59.134508   31913 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220801174348-13911" rescaled to 1
	I0801 17:49:59.134544   31913 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:49:59.134572   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:49:59.134599   31913 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0801 17:49:59.134722   31913 config.go:180] Loaded profile config "default-k8s-different-port-20220801174348-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:49:59.173487   31913 out.go:177] * Verifying Kubernetes components...
	I0801 17:49:59.173610   31913 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.173622   31913 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247530   31913 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247534   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0801 17:49:59.247541   31913 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:49:59.173621   31913 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247569   31913 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247592   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.247570   31913 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220801174348-13911"
	W0801 17:49:59.247619   31913 addons.go:162] addon metrics-server should already be in state true
	I0801 17:49:59.226194   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0801 17:49:59.173631   31913 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247669   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.247687   31913 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220801174348-13911"
	W0801 17:49:59.247701   31913 addons.go:162] addon dashboard should already be in state true
	I0801 17:49:59.247734   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.247986   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.248049   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.248203   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.249076   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.384922   31913 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:49:59.406550   31913 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0801 17:49:59.443540   31913 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0801 17:49:59.480448   31913 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:49:59.501569   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0801 17:49:59.501592   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0801 17:49:59.501594   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 17:49:59.501769   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.539459   31913 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0801 17:49:59.501851   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.502354   31913 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220801174348-13911"
	W0801 17:49:59.576566   31913 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:49:59.576647   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0801 17:49:59.576660   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0801 17:49:59.576678   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.576764   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.580094   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.625680   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:49:59.681280   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:49:59.688282   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:49:59.691624   31913 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:49:59.691636   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:49:59.691686   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.777185   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:49:59.831897   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0801 17:49:59.831911   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0801 17:49:59.910096   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0801 17:49:59.910110   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0801 17:49:59.919841   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0801 17:49:59.919858   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0801 17:49:59.921501   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:49:59.933746   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:49:59.933762   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0801 17:50:00.011707   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0801 17:50:00.011724   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0801 17:50:00.031335   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:50:00.033068   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:50:00.036467   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0801 17:50:00.036480   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0801 17:50:00.116457   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0801 17:50:00.116470   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0801 17:50:00.214471   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0801 17:50:00.214492   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0801 17:50:00.326401   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0801 17:50:00.326442   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0801 17:50:00.401494   31913 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.153808764s)
	I0801 17:50:00.401493   31913 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.15390213s)
	I0801 17:50:00.401524   31913 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
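The sed pipeline that just completed splices a hosts block into the CoreDNS Corefile immediately before the forward directive, which is what makes host.minikube.internal resolvable from inside the cluster. The inserted stanza, taken verbatim from the sed payload above, is:

            hosts {
               192.168.65.2 host.minikube.internal
               fallthrough
            }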
	I0801 17:50:00.401623   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
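The `docker container inspect -f` invocations here use a Go template to pull the host port Docker published for a given container port (22/tcp for SSH earlier, 8444/tcp for the apiserver now). A minimal standalone sketch of the same lookup, with the profile name taken from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same Go template as the cli_runner lines: index into the
        // published-port map for 8444/tcp and take the first binding.
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`,
            "default-k8s-different-port-20220801174348-13911").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.TrimSpace(string(out))) // e.g. 52049 in this run
    }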
	I0801 17:50:00.418882   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0801 17:50:00.418903   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0801 17:50:00.481810   31913 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220801174348-13911" to be "Ready" ...
	I0801 17:50:00.505662   31913 node_ready.go:49] node "default-k8s-different-port-20220801174348-13911" has status "Ready":"True"
	I0801 17:50:00.505675   31913 node_ready.go:38] duration metric: took 23.848502ms waiting for node "default-k8s-different-port-20220801174348-13911" to be "Ready" ...
	I0801 17:50:00.505683   31913 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:50:00.512490   31913 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-cvnql" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:00.540422   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0801 17:50:00.540439   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0801 17:50:00.612866   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:50:00.612881   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0801 17:50:00.637798   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:50:00.747371   31913 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220801174348-13911"
	I0801 17:50:01.390479   31913 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0801 17:50:01.432449   31913 addons.go:414] enableAddons completed in 2.297828518s
	I0801 17:50:02.527846   31913 pod_ready.go:92] pod "coredns-6d4b75cb6d-cvnql" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:02.527860   31913 pod_ready.go:81] duration metric: took 2.01531768s waiting for pod "coredns-6d4b75cb6d-cvnql" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:02.527869   31913 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-z8jfq" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.540729   31913 pod_ready.go:92] pod "coredns-6d4b75cb6d-z8jfq" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.540741   31913 pod_ready.go:81] duration metric: took 2.012836849s waiting for pod "coredns-6d4b75cb6d-z8jfq" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.540747   31913 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.545243   31913 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.545251   31913 pod_ready.go:81] duration metric: took 4.4993ms waiting for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.545258   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.548996   31913 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.549004   31913 pod_ready.go:81] duration metric: took 3.736506ms waiting for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.549010   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.552768   31913 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.552776   31913 pod_ready.go:81] duration metric: took 3.76149ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.552782   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dvn56" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.556657   31913 pod_ready.go:92] pod "kube-proxy-dvn56" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.556665   31913 pod_ready.go:81] duration metric: took 3.869516ms waiting for pod "kube-proxy-dvn56" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.556670   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.940897   31913 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.940907   31913 pod_ready.go:81] duration metric: took 384.226091ms waiting for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.940914   31913 pod_ready.go:38] duration metric: took 4.435152434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:50:04.940932   31913 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:50:04.940979   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:50:04.951301   31913 api_server.go:71] duration metric: took 5.816647694s to wait for apiserver process to appear ...
	I0801 17:50:04.951313   31913 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:50:04.951319   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:50:04.956817   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 200:
	ok
	I0801 17:50:04.958134   31913 api_server.go:140] control plane version: v1.24.3
	I0801 17:50:04.958144   31913 api_server.go:130] duration metric: took 6.826071ms to wait for apiserver health ...
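The healthz probe above is a plain HTTPS GET that expects a 200 response with body "ok". A minimal sketch of such a probe against the forwarded apiserver port from this run; skipping TLS verification is an illustrative shortcut, since minikube itself validates against the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://127.0.0.1:52049/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }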
	I0801 17:50:04.958149   31913 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:50:05.140334   31913 system_pods.go:59] 9 kube-system pods found
	I0801 17:50:05.140349   31913 system_pods.go:61] "coredns-6d4b75cb6d-cvnql" [9614734b-2bd7-4bbf-97b5-634cb4468393] Running
	I0801 17:50:05.140353   31913 system_pods.go:61] "coredns-6d4b75cb6d-z8jfq" [860c344e-4653-4582-ab6e-19ef7308526f] Running
	I0801 17:50:05.140357   31913 system_pods.go:61] "etcd-default-k8s-different-port-20220801174348-13911" [441c7722-6d7f-4385-b0b8-649b3f4ce6f2] Running
	I0801 17:50:05.140360   31913 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220801174348-13911" [97cf9337-b5ff-477d-b398-366aee9386c6] Running
	I0801 17:50:05.140364   31913 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220801174348-13911" [a457c03f-b47f-41b4-98f9-c117f334574f] Running
	I0801 17:50:05.140368   31913 system_pods.go:61] "kube-proxy-dvn56" [c67e035f-7889-4442-a7af-6972b0937045] Running
	I0801 17:50:05.140373   31913 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220801174348-13911" [c3505894-023d-4f91-baaa-6328dac164b8] Running
	I0801 17:50:05.140378   31913 system_pods.go:61] "metrics-server-5c6f97fb75-wzfjd" [43803567-1715-4fb4-9020-c9ac939c5e55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:50:05.140383   31913 system_pods.go:61] "storage-provisioner" [1e484f79-248b-4da1-a6d5-eef631825f86] Running
	I0801 17:50:05.140387   31913 system_pods.go:74] duration metric: took 182.231588ms to wait for pod list to return data ...
	I0801 17:50:05.140392   31913 default_sa.go:34] waiting for default service account to be created ...
	I0801 17:50:05.338528   31913 default_sa.go:45] found service account: "default"
	I0801 17:50:05.338539   31913 default_sa.go:55] duration metric: took 198.14019ms for default service account to be created ...
	I0801 17:50:05.338544   31913 system_pods.go:116] waiting for k8s-apps to be running ...
	I0801 17:50:05.542082   31913 system_pods.go:86] 9 kube-system pods found
	I0801 17:50:05.542095   31913 system_pods.go:89] "coredns-6d4b75cb6d-cvnql" [9614734b-2bd7-4bbf-97b5-634cb4468393] Running
	I0801 17:50:05.542100   31913 system_pods.go:89] "coredns-6d4b75cb6d-z8jfq" [860c344e-4653-4582-ab6e-19ef7308526f] Running
	I0801 17:50:05.542103   31913 system_pods.go:89] "etcd-default-k8s-different-port-20220801174348-13911" [441c7722-6d7f-4385-b0b8-649b3f4ce6f2] Running
	I0801 17:50:05.542107   31913 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220801174348-13911" [97cf9337-b5ff-477d-b398-366aee9386c6] Running
	I0801 17:50:05.542111   31913 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220801174348-13911" [a457c03f-b47f-41b4-98f9-c117f334574f] Running
	I0801 17:50:05.542115   31913 system_pods.go:89] "kube-proxy-dvn56" [c67e035f-7889-4442-a7af-6972b0937045] Running
	I0801 17:50:05.542131   31913 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220801174348-13911" [c3505894-023d-4f91-baaa-6328dac164b8] Running
	I0801 17:50:05.542140   31913 system_pods.go:89] "metrics-server-5c6f97fb75-wzfjd" [43803567-1715-4fb4-9020-c9ac939c5e55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:50:05.542145   31913 system_pods.go:89] "storage-provisioner" [1e484f79-248b-4da1-a6d5-eef631825f86] Running
	I0801 17:50:05.542149   31913 system_pods.go:126] duration metric: took 203.598883ms to wait for k8s-apps to be running ...
	I0801 17:50:05.542158   31913 system_svc.go:44] waiting for kubelet service to be running ....
	I0801 17:50:05.542206   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:50:05.551638   31913 system_svc.go:56] duration metric: took 9.480244ms WaitForService to wait for kubelet.
	I0801 17:50:05.551649   31913 kubeadm.go:572] duration metric: took 6.41698891s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0801 17:50:05.551663   31913 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:50:05.736899   31913 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:50:05.736912   31913 node_conditions.go:123] node cpu capacity is 6
	I0801 17:50:05.736919   31913 node_conditions.go:105] duration metric: took 185.250207ms to run NodePressure ...
	I0801 17:50:05.736928   31913 start.go:216] waiting for startup goroutines ...
	I0801 17:50:05.767446   31913 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:50:05.791650   31913 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220801174348-13911" cluster and "default" namespace by default
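The "(minor skew: 0)" note two lines up compares only the minor version components of the local kubectl (1.24.1) and the cluster (1.24.3). A small sketch of that comparison, assuming plain major.minor.patch version strings:

    package versionutil

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns |kubectl minor - cluster minor|, e.g. 0 for
    // "1.24.1" versus "1.24.3" as reported above.
    func minorSkew(kubectlVer, clusterVer string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("bad version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        km, err := minor(kubectlVer)
        if err != nil {
            return 0, err
        }
        cm, err := minor(clusterVer)
        if err != nil {
            return 0, err
        }
        if km < cm {
            return cm - km, nil
        }
        return km - cm, nil
    }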
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:33:03 UTC, end at Tue 2022-08-02 00:50:46 UTC. --
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.047508449Z" level=info msg="Processing signal 'terminated'"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.048554008Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.049066697Z" level=info msg="Daemon shutdown complete"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.049140956Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: docker.service: Succeeded.
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: Stopped Docker Application Container Engine.
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: Starting Docker Application Container Engine...
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.103993889Z" level=info msg="Starting up"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107258175Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107331231Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107364819Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107377776Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108456849Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108470092Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108484226Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108493814Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.111425754Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.115111191Z" level=info msg="Loading containers: start."
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.188779913Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.218225237Z" level=info msg="Loading containers: done."
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.226251934Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.226311143Z" level=info msg="Daemon has completed initialization"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: Started Docker Application Container Engine.
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.252520264Z" level=info msg="API listen on [::]:2376"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.256100929Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-08-02T00:50:48Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  00:50:48 up  1:15,  0 users,  load average: 0.99, 0.71, 0.82
	Linux old-k8s-version-20220801172716-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:33:03 UTC, end at Tue 2022-08-02 00:50:48 UTC. --
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24511]: I0802 00:50:47.085944   24511 server.go:410] Version: v1.16.0
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24511]: I0802 00:50:47.086100   24511 plugins.go:100] No cloud provider specified.
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24511]: I0802 00:50:47.086110   24511 server.go:773] Client rotation is on, will bootstrap in background
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24511]: I0802 00:50:47.087749   24511 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24511]: W0802 00:50:47.088380   24511 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24511]: W0802 00:50:47.088442   24511 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24511]: F0802 00:50:47.088466   24511 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24524]: I0802 00:50:47.837313   24524 server.go:410] Version: v1.16.0
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24524]: I0802 00:50:47.837653   24524 plugins.go:100] No cloud provider specified.
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24524]: I0802 00:50:47.837665   24524 server.go:773] Client rotation is on, will bootstrap in background
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24524]: I0802 00:50:47.839360   24524 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24524]: W0802 00:50:47.842682   24524 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24524]: W0802 00:50:47.843036   24524 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 kubelet[24524]: F0802 00:50:47.843119   24524 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 02 00:50:47 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 02 00:50:48 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 930.
	Aug 02 00:50:48 old-k8s-version-20220801172716-13911 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 02 00:50:48 old-k8s-version-20220801172716-13911 systemd[1]: Started kubelet: The Kubernetes Node Agent.
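The crash loop above (restart counter 929, then 930) is driven by the fatal "mountpoint for cpu not found": the v1.16.0 kubelet expects a cgroup v1 cpu controller mount and exits when the container's cgroup layout does not expose one, so systemd restarts it indefinitely. A sketch of the kind of /proc/mounts scan whose failure produces that message, assuming cgroup v1 semantics:

    package cgroupcheck

    import (
        "os"
        "strings"
    )

    // hasCPUCgroupMount reports whether any cgroup v1 mount carries the
    // cpu controller, roughly what the kubelet error says is absent.
    func hasCPUCgroupMount() (bool, error) {
        data, err := os.ReadFile("/proc/mounts")
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(string(data), "\n") {
            f := strings.Fields(line)
            // fields: device mountpoint fstype options ...
            if len(f) >= 4 && f[2] == "cgroup" {
                for _, opt := range strings.Split(f[3], ",") {
                    if opt == "cpu" {
                        return true, nil
                    }
                }
            }
        }
        return false, nil
    }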
	
	

                                                
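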
                                                
-- /stdout --
** stderr ** 
	E0801 17:50:48.302587   32377 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 2 (442.926114ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220801172716-13911" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (43.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220801173626-13911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911: exit status 2 (16.105340204s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
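The failing assertion reads minikube's templated status output: after `pause`, `status --format={{.APIServer}}` should print "Paused", but this run printed "Stopped". A hedged sketch of that check; the helper name and profile parameter are illustrative, while the real assertion lives at start_stop_delete_test.go:311:

    package teststub

    import (
        "os/exec"
        "strings"
        "testing"
    )

    // assertPaused mirrors the post-pause check above: run the built
    // binary's status command with a Go-template format and compare the
    // printed state.
    func assertPaused(t *testing.T, profile string) {
        out, _ := exec.Command("out/minikube-darwin-amd64", "status",
            "--format={{.APIServer}}", "-p", profile, "-n", profile).Output()
        // `status` exits non-zero for non-running states ("may be ok"
        // above), so the exit code is tolerated and only the printed
        // state is asserted on.
        if got := strings.TrimSpace(string(out)); got != "Paused" {
            t.Errorf("post-pause apiserver status = %q; want %q", got, "Paused")
        }
    }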
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911: exit status 2 (16.101882917s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220801173626-13911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220801173626-13911
helpers_test.go:235: (dbg) docker inspect no-preload-20220801173626-13911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c",
	        "Created": "2022-08-02T00:36:28.462022339Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268936,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:37:46.382223185Z",
	            "FinishedAt": "2022-08-02T00:37:44.350232944Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c/hostname",
	        "HostsPath": "/var/lib/docker/containers/102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c/hosts",
	        "LogPath": "/var/lib/docker/containers/102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c/102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c-json.log",
	        "Name": "/no-preload-20220801173626-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220801173626-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220801173626-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4bc2a91037ad8ee229bf7d3a0907a2001651ed7982fa85c577929eba6ddd02a6-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4bc2a91037ad8ee229bf7d3a0907a2001651ed7982fa85c577929eba6ddd02a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4bc2a91037ad8ee229bf7d3a0907a2001651ed7982fa85c577929eba6ddd02a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4bc2a91037ad8ee229bf7d3a0907a2001651ed7982fa85c577929eba6ddd02a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220801173626-13911",
	                "Source": "/var/lib/docker/volumes/no-preload-20220801173626-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220801173626-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220801173626-13911",
	                "name.minikube.sigs.k8s.io": "no-preload-20220801173626-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7448afeb5c2dbc9c26c2b32362de1b7224d710927e15d48b41f8303e6786b40f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51290"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51291"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51292"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51293"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51289"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7448afeb5c2d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220801173626-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "102f6ba38eb4",
	                        "no-preload-20220801173626-13911"
	                    ],
	                    "NetworkID": "363df1b6c81b32b4a7ad3992422335fcbb0b1e69be15a3e6ad5758b34c73d5d3",
	                    "EndpointID": "dedb229046ff2716bfa9a4592b609c9537acfee644a7eff4393fb3778238b1fc",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220801173626-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220801173626-13911 logs -n 25: (2.879976594s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                   |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p                                                | kubenet-20220801171037-13911               | jenkins | v1.26.0 | 01 Aug 22 17:28 PDT | 01 Aug 22 17:28 PDT |
	|         | kubenet-20220801171037-13911                      |                                            |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                            |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220801171037-13911               | jenkins | v1.26.0 | 01 Aug 22 17:29 PDT | 01 Aug 22 17:29 PDT |
	|         | kubenet-20220801171037-13911                      |                                            |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:29 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                            |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                            |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                            |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                            |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:31 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                            |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                            |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT | 01 Aug 22 17:33 PDT |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                            |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT | 01 Aug 22 17:33 PDT |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                            |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                            |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                            |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                            |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                            |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                            |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                            |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                            |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                            |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220801173625-13911 | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | disable-driver-mounts-20220801173625-13911        |                                            |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                            |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                            |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                            |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                            |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:42 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                            |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                            |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                            |         |         |                     |                     |
	|---------|---------------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:37:45
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:37:45.136795   31047 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:37:45.137023   31047 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:37:45.137028   31047 out.go:309] Setting ErrFile to fd 2...
	I0801 17:37:45.137032   31047 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:37:45.137145   31047 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:37:45.137612   31047 out.go:303] Setting JSON to false
	I0801 17:37:45.152591   31047 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9436,"bootTime":1659391229,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:37:45.152701   31047 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:37:45.174344   31047 out.go:177] * [no-preload-20220801173626-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:37:45.196180   31047 notify.go:193] Checking for updates...
	I0801 17:37:45.217756   31047 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:37:45.238861   31047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:37:45.260039   31047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:37:45.280936   31047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:37:45.302202   31047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:37:45.324757   31047 config.go:180] Loaded profile config "no-preload-20220801173626-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:37:45.325426   31047 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:37:45.394757   31047 docker.go:137] docker version: linux-20.10.17
	I0801 17:37:45.394914   31047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:37:45.527503   31047 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:37:45.457586218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:37:45.571140   31047 out.go:177] * Using the docker driver based on existing profile
	I0801 17:37:45.592082   31047 start.go:284] selected driver: docker
	I0801 17:37:45.592099   31047 start.go:808] validating driver "docker" against &{Name:no-preload-20220801173626-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:37:45.592198   31047 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:37:45.594452   31047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:37:45.733083   31047 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:37:45.664473823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:37:45.733245   31047 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 17:37:45.733262   31047 cni.go:95] Creating CNI manager for ""
	I0801 17:37:45.733271   31047 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:37:45.733294   31047 start_flags.go:310] config:
	{Name:no-preload-20220801173626-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:37:45.777022   31047 out.go:177] * Starting control plane node no-preload-20220801173626-13911 in cluster no-preload-20220801173626-13911
	I0801 17:37:45.799262   31047 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:37:45.820970   31047 out.go:177] * Pulling base image ...
	I0801 17:37:45.842197   31047 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:37:45.842217   31047 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:37:45.842421   31047 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/config.json ...
	I0801 17:37:45.842537   31047 cache.go:107] acquiring lock: {Name:mkce27c207a7bf01881de4cf2e18a8ec061785d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.842574   31047 cache.go:107] acquiring lock: {Name:mk33f064d166c5a0dc9a025cb9d5db4a25dde34f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.843994   31047 cache.go:107] acquiring lock: {Name:mk83ada496db165959cae463687f409b745fe431 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844359   31047 cache.go:107] acquiring lock: {Name:mk1a37bbfd8a0fda4175037a2df9b28a8bff25fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844423   31047 cache.go:107] acquiring lock: {Name:mk8f04950ca6b931221e073d61c347db62721cdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844390   31047 cache.go:107] acquiring lock: {Name:mk885468f27c8850bc0b7933d3a2ff478aab774d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844464   31047 cache.go:107] acquiring lock: {Name:mk3407b9bf31dee0ad589c69c26f0a179fd3a6e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844507   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 exists
	I0801 17:37:45.844473   31047 cache.go:107] acquiring lock: {Name:mk8a29c24e1671055af457da8f29bfaf97f492d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.845147   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 exists
	I0801 17:37:45.845108   31047 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3" took 1.980679ms
	I0801 17:37:45.844483   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0801 17:37:45.845289   31047 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 succeeded
	I0801 17:37:45.845305   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 exists
	I0801 17:37:45.845302   31047 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.789096ms
	I0801 17:37:45.845308   31047 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0" took 2.704085ms
	I0801 17:37:45.845327   31047 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0801 17:37:45.845337   31047 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 succeeded
	I0801 17:37:45.845331   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 exists
	I0801 17:37:45.845313   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 exists
	I0801 17:37:45.845364   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 exists
	I0801 17:37:45.845372   31047 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3" took 1.105087ms
	I0801 17:37:45.845382   31047 cache.go:96] cache image "k8s.gcr.io/pause:3.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7" took 1.189591ms
	I0801 17:37:45.845390   31047 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 succeeded
	I0801 17:37:45.845393   31047 cache.go:80] save to tar file k8s.gcr.io/pause:3.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 succeeded
	I0801 17:37:45.845393   31047 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3" took 1.139616ms
	I0801 17:37:45.845347   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0801 17:37:45.845416   31047 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 succeeded
	I0801 17:37:45.845331   31047 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3" took 1.102183ms
	I0801 17:37:45.845430   31047 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 succeeded
	I0801 17:37:45.845426   31047 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 1.373082ms
	I0801 17:37:45.845440   31047 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0801 17:37:45.845462   31047 cache.go:87] Successfully saved all images to host disk.
	I0801 17:37:45.908069   31047 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:37:45.908096   31047 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:37:45.908107   31047 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:37:45.908147   31047 start.go:371] acquiring machines lock for no-preload-20220801173626-13911: {Name:mkda6e117952af39a3874882bbd203241b49719c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.908210   31047 start.go:375] acquired machines lock for "no-preload-20220801173626-13911" in 52.481µs
	I0801 17:37:45.908230   31047 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:37:45.908238   31047 fix.go:55] fixHost starting: 
	I0801 17:37:45.908457   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:37:45.974772   31047 fix.go:103] recreateIfNeeded on no-preload-20220801173626-13911: state=Stopped err=<nil>
	W0801 17:37:45.974798   31047 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:37:45.996880   31047 out.go:177] * Restarting existing docker container for "no-preload-20220801173626-13911" ...
	I0801 17:37:46.018574   31047 cli_runner.go:164] Run: docker start no-preload-20220801173626-13911
	I0801 17:37:46.384675   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:37:46.457749   31047 kic.go:415] container "no-preload-20220801173626-13911" state is running.
	I0801 17:37:46.458352   31047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220801173626-13911
	I0801 17:37:46.531639   31047 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/config.json ...
	I0801 17:37:46.532029   31047 machine.go:88] provisioning docker machine ...
	I0801 17:37:46.532061   31047 ubuntu.go:169] provisioning hostname "no-preload-20220801173626-13911"
	I0801 17:37:46.532140   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:46.605057   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:46.605254   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:46.605270   31047 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220801173626-13911 && echo "no-preload-20220801173626-13911" | sudo tee /etc/hostname
	I0801 17:37:46.733056   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220801173626-13911
	
	I0801 17:37:46.733140   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:46.805118   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:46.805272   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:46.805287   31047 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220801173626-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220801173626-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220801173626-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:37:46.917485   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:37:46.917506   31047 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:37:46.917535   31047 ubuntu.go:177] setting up certificates
	I0801 17:37:46.917541   31047 provision.go:83] configureAuth start
	I0801 17:37:46.917615   31047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220801173626-13911
	I0801 17:37:46.990412   31047 provision.go:138] copyHostCerts
	I0801 17:37:46.990491   31047 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:37:46.990502   31047 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:37:46.990596   31047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:37:46.990798   31047 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:37:46.990808   31047 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:37:46.990864   31047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:37:46.991000   31047 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:37:46.991007   31047 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:37:46.991062   31047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:37:46.991772   31047 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220801173626-13911 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220801173626-13911]
	I0801 17:37:47.183740   31047 provision.go:172] copyRemoteCerts
	I0801 17:37:47.183812   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:37:47.183860   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.256107   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:47.339121   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:37:47.356831   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0801 17:37:47.373830   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0801 17:37:47.392418   31047 provision.go:86] duration metric: configureAuth took 474.857796ms
	I0801 17:37:47.392433   31047 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:37:47.392595   31047 config.go:180] Loaded profile config "no-preload-20220801173626-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:37:47.392663   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.464884   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:47.465036   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:47.465047   31047 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:37:47.579712   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:37:47.579729   31047 ubuntu.go:71] root file system type: overlay
	I0801 17:37:47.579870   31047 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:37:47.579944   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.650983   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:47.651127   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:47.651186   31047 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:37:47.774346   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:37:47.774436   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.845704   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:47.845865   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:47.845879   31047 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:37:47.964006   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:37:47.964021   31047 machine.go:91] provisioned docker machine in 1.43196114s
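	The `sudo diff -u ... || { mv ...; systemctl ... restart docker; }` command above is a guard: the unit is rewritten and Docker restarted only when the rendered file actually differs, which is why an unchanged rerun completes in about a second. A local Go sketch of the same idempotent pattern, assuming a hypothetical updateUnit helper rather than minikube's SSH runner:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(unitPath string, rendered []byte) error {
	current, err := os.ReadFile(unitPath)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the disruptive restart, like `diff -u ... ||`
	}
	if err := os.WriteFile(unitPath, rendered, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
	}
	return nil
}

func main() {
	// Hypothetical rendered unit body; the log builds the real one with printf over SSH.
	rendered := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/lib/systemd/system/docker.service", rendered); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}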
	I0801 17:37:47.964037   31047 start.go:307] post-start starting for "no-preload-20220801173626-13911" (driver="docker")
	I0801 17:37:47.964043   31047 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:37:47.964117   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:37:47.964170   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.035712   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.118288   31047 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:37:48.121549   31047 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:37:48.121566   31047 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:37:48.121586   31047 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:37:48.121595   31047 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:37:48.121603   31047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:37:48.121710   31047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:37:48.121847   31047 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:37:48.121999   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:37:48.129029   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:37:48.146801   31047 start.go:310] post-start completed in 182.747614ms
	I0801 17:37:48.146864   31047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:37:48.146917   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.217007   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.300445   31047 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:37:48.304748   31047 fix.go:57] fixHost completed within 2.396472477s
	I0801 17:37:48.304758   31047 start.go:82] releasing machines lock for "no-preload-20220801173626-13911", held for 2.39650437s
	I0801 17:37:48.304820   31047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220801173626-13911
	I0801 17:37:48.374117   31047 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:37:48.374143   31047 ssh_runner.go:195] Run: systemctl --version
	I0801 17:37:48.374196   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.374212   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.449727   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.451539   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.719080   31047 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:37:48.729189   31047 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:37:48.729244   31047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:37:48.740655   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:37:48.753772   31047 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:37:48.824006   31047 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:37:48.896529   31047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:37:48.963357   31047 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:37:49.205490   31047 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0801 17:37:49.268926   31047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:37:49.323147   31047 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0801 17:37:49.332627   31047 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0801 17:37:49.332704   31047 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0801 17:37:49.336848   31047 start.go:471] Will wait 60s for crictl version
	I0801 17:37:49.336901   31047 ssh_runner.go:195] Run: sudo crictl version
	I0801 17:37:49.441376   31047 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0801 17:37:49.441442   31047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:37:49.478518   31047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:37:49.557572   31047 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0801 17:37:49.557790   31047 cli_runner.go:164] Run: docker exec -t no-preload-20220801173626-13911 dig +short host.docker.internal
	I0801 17:37:49.686230   31047 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:37:49.686336   31047 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:37:49.690942   31047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
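	The /etc/hosts rewrite above is another idempotent trick: `grep -v` strips any stale host.minikube.internal line before the fresh mapping is appended, so repeated starts never accumulate duplicates. A Go sketch of the same rewrite done in memory, with path and values taken from the log line:

package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any previous mapping, whatever IP it had
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}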
	I0801 17:37:49.700964   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:49.771329   31047 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:37:49.771383   31047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:37:49.802366   31047 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0801 17:37:49.802385   31047 cache_images.go:84] Images are preloaded, skipping loading
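	"Images are preloaded, skipping loading" is decided by listing what the runtime already has and diffing it against what kubeadm will need. A minimal sketch of that check, where `required` is a hypothetical subset of the list printed above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	required := []string{"k8s.gcr.io/kube-apiserver:v1.24.3", "k8s.gcr.io/etcd:3.5.3-0"}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, would load from cache:", img)
		}
	}
}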
	I0801 17:37:49.802458   31047 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:37:49.879052   31047 cni.go:95] Creating CNI manager for ""
	I0801 17:37:49.879064   31047 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:37:49.879080   31047 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:37:49.879096   31047 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220801173626-13911 NodeName:no-preload-20220801173626-13911 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:37:49.879194   31047 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "no-preload-20220801173626-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
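	The kubeadm config above is rendered from the options struct logged at kubeadm.go:158. A minimal text/template sketch of how such a rendering could work; the template and struct fields here are hypothetical illustrations, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.67.2", 8443, "/var/run/cri-dockerd.sock", "no-preload-20220801173626-13911"}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}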
	I0801 17:37:49.879290   31047 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=no-preload-20220801173626-13911 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:37:49.879351   31047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 17:37:49.887424   31047 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:37:49.887487   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:37:49.894755   31047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (493 bytes)
	I0801 17:37:49.908266   31047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:37:49.920870   31047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0801 17:37:49.933830   31047 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:37:49.937511   31047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:37:49.946559   31047 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911 for IP: 192.168.67.2
	I0801 17:37:49.946659   31047 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:37:49.946707   31047 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:37:49.946786   31047 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.key
	I0801 17:37:49.946845   31047 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/apiserver.key.c7fa3a9e
	I0801 17:37:49.946897   31047 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/proxy-client.key
	I0801 17:37:49.947100   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:37:49.947138   31047 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:37:49.947151   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:37:49.947189   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:37:49.947218   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:37:49.947250   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:37:49.947309   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:37:49.947829   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:37:49.964521   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0801 17:37:49.981144   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:37:49.997236   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0801 17:37:50.014091   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:37:50.030809   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:37:50.047089   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:37:50.063912   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:37:50.082297   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:37:50.101186   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:37:50.118882   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:37:50.136291   31047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:37:50.149676   31047 ssh_runner.go:195] Run: openssl version
	I0801 17:37:50.163581   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:37:50.171105   31047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:37:50.174935   31047 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:37:50.174989   31047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:37:50.179840   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 17:37:50.186763   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:37:50.194343   31047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:37:50.198345   31047 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:37:50.198395   31047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:37:50.203934   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:37:50.210838   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:37:50.218583   31047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:37:50.222458   31047 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:37:50.222498   31047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:37:50.227505   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
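	The `openssl x509 -hash` plus `ln -fs .../<hash>.0` sequence above exists because OpenSSL looks up CA certificates in /etc/ssl/certs by subject-hash filename. A Go sketch of the same hash-and-symlink step, shelling out to openssl for the hash; paths mirror the log's values:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}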
	I0801 17:37:50.234458   31047 kubeadm.go:395] StartCluster: {Name:no-preload-20220801173626-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:37:50.234558   31047 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:37:50.264051   31047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:37:50.271634   31047 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:37:50.271652   31047 kubeadm.go:626] restartCluster start
	I0801 17:37:50.271694   31047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:37:50.278298   31047 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.278364   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:50.349453   31047 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220801173626-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:37:50.349640   31047 kubeconfig.go:127] "no-preload-20220801173626-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:37:50.349966   31047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:37:50.351119   31047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:37:50.358739   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.358794   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.366952   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.567082   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.567203   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.576999   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.769130   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.769340   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.779725   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.969182   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.969292   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.979800   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.167920   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.168015   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.178836   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.367096   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.367205   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.376391   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.569038   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.569130   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.578185   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.769147   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.769333   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.779768   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.967690   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.967807   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.978203   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.168126   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.168251   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.178788   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.367362   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.367477   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.376348   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.569124   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.569313   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.579843   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.767372   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.767476   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.776970   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.968285   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.968420   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.978224   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.168014   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:53.168103   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:53.178218   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.369185   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:53.369348   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:53.380616   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.380627   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:53.380671   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:53.388701   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.388714   31047 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
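	The repeated "Checking apiserver status" entries above are a poll loop: roughly every 200 ms minikube re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` until it exits 0 or the loop gives up with "timed out waiting for the condition", at which point the cluster is reconfigured. A Go sketch of that loop, with a hypothetical local runner standing in for minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pgrep exits 0 only when a process matched
		}
		time.Sleep(200 * time.Millisecond)
	}
	return "", fmt.Errorf("timed out waiting for the condition")
}

func main() {
	if pid, err := waitForAPIServerPID(3 * time.Second); err != nil {
		fmt.Println("apiserver error:", err) // triggers the cluster reconfigure path
	} else {
		fmt.Print("apiserver pid: ", pid)
	}
}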
	I0801 17:37:53.388723   31047 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:37:53.388774   31047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:37:53.420707   31047 docker.go:443] Stopping containers: [d5a3d4ccde35 795a7dfc5c0b 9c6c1ed81713 1d852044111d 2f0cbdfcc618 803f6a6ae70d 41e8b95b80bc b8daaea5d97c b53b375d313f be3fbf75c305 482dbbf122e4 5abcdb77ef04 302f547a73d8 5c08de9ffe04 daf4df3d9163 4dd96b3aa0d4]
	I0801 17:37:53.420777   31047 ssh_runner.go:195] Run: docker stop d5a3d4ccde35 795a7dfc5c0b 9c6c1ed81713 1d852044111d 2f0cbdfcc618 803f6a6ae70d 41e8b95b80bc b8daaea5d97c b53b375d313f be3fbf75c305 482dbbf122e4 5abcdb77ef04 302f547a73d8 5c08de9ffe04 daf4df3d9163 4dd96b3aa0d4
	I0801 17:37:53.452120   31047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:37:53.462361   31047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:37:53.469872   31047 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  2 00:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug  2 00:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug  2 00:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug  2 00:36 /etc/kubernetes/scheduler.conf
	
	I0801 17:37:53.469922   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 17:37:53.477025   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 17:37:53.483955   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 17:37:53.490967   31047 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.491012   31047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0801 17:37:53.497749   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 17:37:53.504618   31047 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.504666   31047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0801 17:37:53.511317   31047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:37:53.518669   31047 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:37:53.518679   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:53.563806   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:54.484230   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:54.652440   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:54.710862   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:54.763698   31047 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:37:54.763766   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:55.273497   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:55.775502   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:55.820137   31047 api_server.go:71] duration metric: took 1.056421863s to wait for apiserver process to appear ...
	I0801 17:37:55.820154   31047 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:37:55.820168   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:55.821591   31047 api_server.go:256] stopped: https://127.0.0.1:51289/healthz: Get "https://127.0.0.1:51289/healthz": EOF
	I0801 17:37:56.322368   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:59.001585   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0801 17:37:59.001601   31047 api_server.go:102] status: https://127.0.0.1:51289/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:37:59.323815   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:59.331879   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:37:59.331896   31047 api_server.go:102] status: https://127.0.0.1:51289/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:37:59.821943   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:59.827351   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:37:59.827368   31047 api_server.go:102] status: https://127.0.0.1:51289/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:38:00.324020   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:38:00.331405   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 200:
	ok
	I0801 17:38:00.337668   31047 api_server.go:140] control plane version: v1.24.3
	I0801 17:38:00.337681   31047 api_server.go:130] duration metric: took 4.517452084s to wait for apiserver health ...
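	The healthz wait above moves through three states: EOF while the apiserver is still binding, 403 because anonymous RBAC is not bootstrapped yet, 500 listing the failing poststarthooks, and finally 200 "ok". A single-probe Go sketch of the check (retry wiring omitted; the forwarded port comes from the log's URL and the self-signed cert forces TLS verification off):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://127.0.0.1:51289/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. EOF while the apiserver is coming up
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 200 with "ok" ends the wait; anything else means keep polling.
	fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
}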
	I0801 17:38:00.337687   31047 cni.go:95] Creating CNI manager for ""
	I0801 17:38:00.337692   31047 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:38:00.337703   31047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:38:00.344812   31047 system_pods.go:59] 8 kube-system pods found
	I0801 17:38:00.344828   31047 system_pods.go:61] "coredns-6d4b75cb6d-qb7sz" [77b59710-ca1b-4065-bf3b-ee7a85c78408] Running
	I0801 17:38:00.344836   31047 system_pods.go:61] "etcd-no-preload-20220801173626-13911" [e7d936e6-08ca-4c1d-99af-689effe61062] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0801 17:38:00.344843   31047 system_pods.go:61] "kube-apiserver-no-preload-20220801173626-13911" [4e6c4e55-cc13-472a-afbe-59a6a2ec20ad] Running
	I0801 17:38:00.344847   31047 system_pods.go:61] "kube-controller-manager-no-preload-20220801173626-13911" [28fbab73-82d5-4181-8471-d287ef555c41] Running
	I0801 17:38:00.344851   31047 system_pods.go:61] "kube-proxy-2spmx" [34f279f3-ae86-4a39-92bc-978b6b6c44fd] Running
	I0801 17:38:00.344855   31047 system_pods.go:61] "kube-scheduler-no-preload-20220801173626-13911" [8b3b67a0-1d6a-454c-85e1-c104c7bff40e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0801 17:38:00.344862   31047 system_pods.go:61] "metrics-server-5c6f97fb75-wrh2c" [9d42bee2-4bb9-4237-8444-831f4c65f0b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:38:00.344866   31047 system_pods.go:61] "storage-provisioner" [dd76b63a-5481-4315-bfbb-d56bd50aef64] Running
	I0801 17:38:00.344870   31047 system_pods.go:74] duration metric: took 7.163598ms to wait for pod list to return data ...
	I0801 17:38:00.344876   31047 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:38:00.347456   31047 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:38:00.347468   31047 node_conditions.go:123] node cpu capacity is 6
	I0801 17:38:00.347477   31047 node_conditions.go:105] duration metric: took 2.59659ms to run NodePressure ...
	I0801 17:38:00.347486   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:38:00.471283   31047 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0801 17:38:00.475832   31047 kubeadm.go:777] kubelet initialised
	I0801 17:38:00.475844   31047 kubeadm.go:778] duration metric: took 4.548844ms waiting for restarted kubelet to initialise ...
	I0801 17:38:00.475851   31047 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:38:00.481039   31047 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-qb7sz" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:00.486739   31047 pod_ready.go:92] pod "coredns-6d4b75cb6d-qb7sz" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:00.486750   31047 pod_ready.go:81] duration metric: took 5.697955ms waiting for pod "coredns-6d4b75cb6d-qb7sz" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:00.486762   31047 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:02.500418   31047 pod_ready.go:102] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:05.000962   31047 pod_ready.go:102] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:07.001386   31047 pod_ready.go:102] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:08.499575   31047 pod_ready.go:92] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:08.499589   31047 pod_ready.go:81] duration metric: took 8.012693599s waiting for pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:08.499595   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:10.513107   31047 pod_ready.go:102] pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:12.510113   31047 pod_ready.go:92] pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:12.510126   31047 pod_ready.go:81] duration metric: took 4.010464323s waiting for pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:12.510132   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.022615   31047 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:14.022629   31047 pod_ready.go:81] duration metric: took 1.512455198s waiting for pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.022635   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2spmx" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.026883   31047 pod_ready.go:92] pod "kube-proxy-2spmx" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:14.026894   31047 pod_ready.go:81] duration metric: took 4.246546ms waiting for pod "kube-proxy-2spmx" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.026900   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.030969   31047 pod_ready.go:92] pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:14.030977   31047 pod_ready.go:81] duration metric: took 4.07323ms waiting for pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
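	Each pod_ready wait above reduces to one predicate: a pod counts as "Ready" when its PodReady condition is True. A minimal sketch of that check using the upstream k8s.io/api types (a simplification of what minikube's pod_ready.go polls, not its actual code):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}},
	}}
	fmt.Println(podReady(pod)) // false → keep waiting, up to the 4m0s budget
}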
	I0801 17:38:14.030983   31047 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:16.041234   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:18.041647   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:20.542837   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:23.041487   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:25.043560   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:27.540915   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:29.543086   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:32.042479   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:34.544640   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:37.044506   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:39.541915   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:41.544271   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:44.041420   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:46.042431   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:48.044498   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:50.543837   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:53.041176   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:55.044380   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:57.541598   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:59.545044   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:02.042789   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:04.044739   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:06.541143   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:08.542691   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	W0801 17:39:10.045604   30307 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
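A minimal reproduction sketch for the probe kubeadm keeps retrying above (the profile name is a placeholder, not taken from this log):

    # Re-run the kubelet health check by hand from the host; a healthy kubelet
    # answers "ok", while "connection refused" on port 10248 means the kubelet
    # process never started listening.
    minikube ssh -p <profile> -- curl -sSL http://localhost:10248/healthz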
	
	I0801 17:39:10.045633   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0801 17:39:10.468055   30307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:39:10.477578   30307 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:39:10.477629   30307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:39:10.485644   30307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
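All four config files are missing because the preceding `kubeadm reset` wiped /etc/kubernetes, so there is no stale configuration to clean up and minikube proceeds straight to a fresh `kubeadm init` below.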
	I0801 17:39:10.485666   30307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:39:11.219133   30307 out.go:204]   - Generating certificates and keys ...
	I0801 17:39:11.823639   30307 out.go:204]   - Booting up control plane ...
	I0801 17:39:11.042720   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:13.042943   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:15.043258   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:17.043865   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:19.542284   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:21.544438   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:24.040750   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:26.042378   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:28.544524   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:31.041513   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:33.042620   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:35.043058   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:37.543424   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:40.043048   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:42.044820   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:44.541659   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:46.544518   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:49.044133   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:51.543084   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:54.045047   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:56.542567   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:58.545088   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:01.043406   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:03.044252   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:05.542151   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:07.543499   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:09.544345   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:12.045195   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:14.542899   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:16.544629   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:18.545930   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:21.044674   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:23.045379   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:25.545385   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:27.545491   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:30.042095   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:32.043488   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:34.548393   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:37.043300   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:39.546662   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:42.044663   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:44.544152   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:46.545544   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:49.042550   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:51.044633   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:53.542274   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:55.543494   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:58.043271   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:00.043457   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:02.043870   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:04.044318   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:06.739199   30307 kubeadm.go:397] StartCluster complete in 7m59.637942115s
	I0801 17:41:06.739275   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:41:06.768243   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.768256   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:41:06.768314   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:41:06.798174   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.798186   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:41:06.798242   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:41:06.827196   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.827207   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:41:06.827266   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:41:06.857151   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.857164   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:41:06.857221   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:41:06.886482   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.886494   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:41:06.886551   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:41:06.915571   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.915583   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:41:06.915642   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:41:06.946187   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.946200   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:41:06.946261   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:41:06.976305   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.976317   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:41:06.976324   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:41:06.976330   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:41:09.033371   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056995262s)
	I0801 17:41:09.033517   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:41:09.033529   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:41:09.074454   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:41:09.074467   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:41:09.086365   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:41:09.086383   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:41:09.139109   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
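The refused connection on localhost:8443 is consistent with the container checks above: every `docker ps -a --filter=name=k8s_...` query returned zero containers, so no kube-apiserver is serving the port kubectl targets. The same one-line check from the log can be run by hand:

    # Empty output means the apiserver container was never even created.
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}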
	I0801 17:41:09.139121   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:41:09.139129   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0801 17:41:09.152961   30307 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0801 17:41:09.152979   30307 out.go:239] * 
	W0801 17:41:09.153105   30307 out.go:239] * 
	W0801 17:41:09.153626   30307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 17:41:09.216113   30307 out.go:177] 
	W0801 17:41:09.258477   30307 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0801 17:41:09.258605   30307 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0801 17:41:09.258689   30307 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0801 17:41:09.300266   30307 out.go:177] 
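A retry sketch following the suggestion printed above (the profile name is a placeholder; the driver and Kubernetes version mirror this run):

    minikube delete -p <profile>
    minikube start -p <profile> --driver=docker --kubernetes-version=v1.16.0 \
      --extra-config=kubelet.cgroup-driver=systemd   # workaround from minikube's own hint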
	I0801 17:41:06.045067   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:08.046647   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:10.542246   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:12.544603   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:15.044082   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:17.045083   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:19.045469   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:21.546117   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:24.043650   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:26.044945   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:28.546922   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:31.044492   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:33.044542   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:35.546423   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:38.043601   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:40.044277   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:42.045851   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:44.548121   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:47.045194   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:49.045814   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:51.546325   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:53.546533   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:56.045181   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:58.546908   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:00.547797   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:03.046647   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:05.050012   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:07.547212   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:10.046877   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:12.547856   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:14.038717   31047 pod_ready.go:81] duration metric: took 4m0.003937501s waiting for pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace to be "Ready" ...
	E0801 17:42:14.038739   31047 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0801 17:42:14.038757   31047 pod_ready.go:38] duration metric: took 4m13.558984148s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:42:14.038793   31047 kubeadm.go:630] restartCluster took 4m23.763066112s
	W0801 17:42:14.038925   31047 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0801 17:42:14.038954   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0801 17:42:16.390778   31047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.351772731s)
	I0801 17:42:16.390841   31047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:42:16.400528   31047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:42:16.408180   31047 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:42:16.408221   31047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:42:16.415600   31047 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:42:16.415627   31047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:42:16.699655   31047 out.go:204]   - Generating certificates and keys ...
	I0801 17:42:17.913958   31047 out.go:204]   - Booting up control plane ...
	I0801 17:42:24.461033   31047 out.go:204]   - Configuring RBAC rules ...
	I0801 17:42:24.836074   31047 cni.go:95] Creating CNI manager for ""
	I0801 17:42:24.836086   31047 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:42:24.836103   31047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:42:24.836174   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:24.836186   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93 minikube.k8s.io/name=no-preload-20220801173626-13911 minikube.k8s.io/updated_at=2022_08_01T17_42_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:24.981309   31047 ops.go:34] apiserver oom_adj: -16
	I0801 17:42:24.981327   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:25.553761   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:26.053271   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:26.553560   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:27.053994   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:27.555176   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:28.054889   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:28.554418   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:29.053832   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:29.553756   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:30.053329   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:30.555289   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:31.053353   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:31.555295   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:32.053348   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:32.553293   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:33.053654   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:33.555103   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:34.054338   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:34.553949   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:35.053541   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:35.553481   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:36.053315   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:36.553930   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:37.054662   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:37.555373   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:38.054000   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:38.553367   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:38.617569   31047 kubeadm.go:1045] duration metric: took 13.781234788s to wait for elevateKubeSystemPrivileges.
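The repeated `kubectl get sa default` calls above appear to be a readiness poll: minikube retries until the "default" ServiceAccount exists in the new cluster before it finishes elevating kube-system privileges, and the 13.78s metric is the duration of that loop.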
	I0801 17:42:38.617584   31047 kubeadm.go:397] StartCluster complete in 4m48.37868331s
	I0801 17:42:38.617608   31047 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:42:38.617699   31047 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:42:38.618272   31047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:42:39.133513   31047 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220801173626-13911" rescaled to 1
	I0801 17:42:39.133558   31047 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:42:39.133567   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:42:39.133607   31047 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0801 17:42:39.133809   31047 config.go:180] Loaded profile config "no-preload-20220801173626-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:42:39.194376   31047 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220801173626-13911"
	I0801 17:42:39.194376   31047 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220801173626-13911"
	I0801 17:42:39.194385   31047 addons.go:65] Setting dashboard=true in profile "no-preload-20220801173626-13911"
	I0801 17:42:39.194282   31047 out.go:177] * Verifying Kubernetes components...
	I0801 17:42:39.194399   31047 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220801173626-13911"
	I0801 17:42:39.194395   31047 addons.go:65] Setting metrics-server=true in profile "no-preload-20220801173626-13911"
	I0801 17:42:39.231626   31047 addons.go:153] Setting addon metrics-server=true in "no-preload-20220801173626-13911"
	W0801 17:42:39.194408   31047 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:42:39.231645   31047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0801 17:42:39.231649   31047 addons.go:162] addon metrics-server should already be in state true
	I0801 17:42:39.194411   31047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220801173626-13911"
	I0801 17:42:39.231723   31047 host.go:66] Checking if "no-preload-20220801173626-13911" exists ...
	I0801 17:42:39.231721   31047 host.go:66] Checking if "no-preload-20220801173626-13911" exists ...
	I0801 17:42:39.194424   31047 addons.go:153] Setting addon dashboard=true in "no-preload-20220801173626-13911"
	W0801 17:42:39.231797   31047 addons.go:162] addon dashboard should already be in state true
	I0801 17:42:39.195531   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0801 17:42:39.231863   31047 host.go:66] Checking if "no-preload-20220801173626-13911" exists ...
	I0801 17:42:39.232481   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:42:39.232531   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:42:39.232583   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:42:39.232790   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:42:39.255171   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:42:39.383513   31047 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0801 17:42:39.403742   31047 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0801 17:42:39.440375   31047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:42:39.410630   31047 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220801173626-13911"
	I0801 17:42:39.440383   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0801 17:42:39.458716   31047 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220801173626-13911" to be "Ready" ...
	I0801 17:42:39.477386   31047 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:42:39.514349   31047 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0801 17:42:39.514354   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0801 17:42:39.514369   31047 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:42:39.514430   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:42:39.535320   31047 host.go:66] Checking if "no-preload-20220801173626-13911" exists ...
	I0801 17:42:39.514432   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:42:39.517175   31047 node_ready.go:49] node "no-preload-20220801173626-13911" has status "Ready":"True"
	I0801 17:42:39.535385   31047 node_ready.go:38] duration metric: took 20.994571ms waiting for node "no-preload-20220801173626-13911" to be "Ready" ...
	I0801 17:42:39.556673   31047 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:42:39.538335   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:42:39.556730   31047 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0801 17:42:39.567415   31047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-2nn4d" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:39.594427   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0801 17:42:39.594445   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0801 17:42:39.594521   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:42:39.622346   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:42:39.627626   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:42:39.645260   31047 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:42:39.645275   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:42:39.645335   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:42:39.682223   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:42:39.729412   31047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:42:39.732366   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:42:39.736209   31047 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0801 17:42:39.736225   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0801 17:42:39.809142   31047 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0801 17:42:39.809166   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0801 17:42:39.823414   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0801 17:42:39.823426   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0801 17:42:39.827731   31047 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:42:39.827742   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0801 17:42:39.841535   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0801 17:42:39.841548   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0801 17:42:39.848769   31047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:42:39.919527   31047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:42:39.919701   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0801 17:42:39.919714   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0801 17:42:39.935283   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0801 17:42:39.935295   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0801 17:42:39.954641   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0801 17:42:39.954657   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0801 17:42:40.030284   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0801 17:42:40.030308   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0801 17:42:40.117562   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0801 17:42:40.117580   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0801 17:42:40.138635   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0801 17:42:40.138655   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0801 17:42:40.225262   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:42:40.225281   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0801 17:42:40.310238   31047 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.078352698s)
	I0801 17:42:40.310268   31047 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0801 17:42:40.324674   31047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:42:40.509435   31047 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220801173626-13911"
	I0801 17:42:40.610125   31047 pod_ready.go:92] pod "coredns-6d4b75cb6d-2nn4d" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:40.610138   31047 pod_ready.go:81] duration metric: took 1.015766318s waiting for pod "coredns-6d4b75cb6d-2nn4d" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:40.610146   31047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-flh6s" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:41.180477   31047 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0801 17:42:41.253292   31047 addons.go:414] enableAddons completed in 2.119666964s
	I0801 17:42:42.621261   31047 pod_ready.go:92] pod "coredns-6d4b75cb6d-flh6s" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:42.621274   31047 pod_ready.go:81] duration metric: took 2.01109299s waiting for pod "coredns-6d4b75cb6d-flh6s" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.621280   31047 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.626750   31047 pod_ready.go:92] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:42.626758   31047 pod_ready.go:81] duration metric: took 5.464362ms waiting for pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.626764   31047 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.631106   31047 pod_ready.go:92] pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:42.631114   31047 pod_ready.go:81] duration metric: took 4.345723ms waiting for pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.631119   31047 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.635093   31047 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:42.635100   31047 pod_ready.go:81] duration metric: took 3.976528ms waiting for pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.635106   31047 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8gpjj" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.719971   31047 pod_ready.go:92] pod "kube-proxy-8gpjj" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:42.719986   31047 pod_ready.go:81] duration metric: took 84.874465ms waiting for pod "kube-proxy-8gpjj" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.719994   31047 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:43.119014   31047 pod_ready.go:92] pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:43.119025   31047 pod_ready.go:81] duration metric: took 399.010321ms waiting for pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:43.119030   31047 pod_ready.go:38] duration metric: took 3.562189656s for extra waiting for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:42:43.119042   31047 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:42:43.119083   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:42:43.131288   31047 api_server.go:71] duration metric: took 3.997644423s to wait for apiserver process to appear ...
	I0801 17:42:43.131304   31047 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:42:43.131313   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:42:43.136512   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 200:
	ok
	I0801 17:42:43.137662   31047 api_server.go:140] control plane version: v1.24.3
	I0801 17:42:43.137671   31047 api_server.go:130] duration metric: took 6.363278ms to wait for apiserver health ...
	I0801 17:42:43.137676   31047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:42:43.320862   31047 system_pods.go:59] 9 kube-system pods found
	I0801 17:42:43.320876   31047 system_pods.go:61] "coredns-6d4b75cb6d-2nn4d" [d263f6f8-04c0-4226-8cd8-34ac2f30b95e] Running
	I0801 17:42:43.320880   31047 system_pods.go:61] "coredns-6d4b75cb6d-flh6s" [407a23dd-cab9-4929-a0e6-d71acc8c10d6] Running
	I0801 17:42:43.320883   31047 system_pods.go:61] "etcd-no-preload-20220801173626-13911" [aa8583d7-65f3-4c5b-adb8-42f87101c146] Running
	I0801 17:42:43.320886   31047 system_pods.go:61] "kube-apiserver-no-preload-20220801173626-13911" [28218c4f-a48c-4d52-9468-1b2e099be70e] Running
	I0801 17:42:43.320890   31047 system_pods.go:61] "kube-controller-manager-no-preload-20220801173626-13911" [38acf19f-e4cd-4018-88ec-8aeedb05a86c] Running
	I0801 17:42:43.320894   31047 system_pods.go:61] "kube-proxy-8gpjj" [24c63150-6434-42c5-abeb-967bd7e0a8b7] Running
	I0801 17:42:43.320898   31047 system_pods.go:61] "kube-scheduler-no-preload-20220801173626-13911" [1e9907bf-bfae-424d-90f1-2bbf4546559c] Running
	I0801 17:42:43.320904   31047 system_pods.go:61] "metrics-server-5c6f97fb75-72ccc" [cde81437-6354-4b6e-97b2-71da55220f7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:42:43.320909   31047 system_pods.go:61] "storage-provisioner" [49990de8-bf79-4a9d-99dd-91cddb6b9f68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:42:43.320913   31047 system_pods.go:74] duration metric: took 183.231196ms to wait for pod list to return data ...
	I0801 17:42:43.320918   31047 default_sa.go:34] waiting for default service account to be created ...
	I0801 17:42:43.518513   31047 default_sa.go:45] found service account: "default"
	I0801 17:42:43.518527   31047 default_sa.go:55] duration metric: took 197.601836ms for default service account to be created ...
	I0801 17:42:43.518534   31047 system_pods.go:116] waiting for k8s-apps to be running ...
	I0801 17:42:43.721993   31047 system_pods.go:86] 9 kube-system pods found
	I0801 17:42:43.722006   31047 system_pods.go:89] "coredns-6d4b75cb6d-2nn4d" [d263f6f8-04c0-4226-8cd8-34ac2f30b95e] Running
	I0801 17:42:43.722011   31047 system_pods.go:89] "coredns-6d4b75cb6d-flh6s" [407a23dd-cab9-4929-a0e6-d71acc8c10d6] Running
	I0801 17:42:43.722014   31047 system_pods.go:89] "etcd-no-preload-20220801173626-13911" [aa8583d7-65f3-4c5b-adb8-42f87101c146] Running
	I0801 17:42:43.722020   31047 system_pods.go:89] "kube-apiserver-no-preload-20220801173626-13911" [28218c4f-a48c-4d52-9468-1b2e099be70e] Running
	I0801 17:42:43.722024   31047 system_pods.go:89] "kube-controller-manager-no-preload-20220801173626-13911" [38acf19f-e4cd-4018-88ec-8aeedb05a86c] Running
	I0801 17:42:43.722041   31047 system_pods.go:89] "kube-proxy-8gpjj" [24c63150-6434-42c5-abeb-967bd7e0a8b7] Running
	I0801 17:42:43.722048   31047 system_pods.go:89] "kube-scheduler-no-preload-20220801173626-13911" [1e9907bf-bfae-424d-90f1-2bbf4546559c] Running
	I0801 17:42:43.722055   31047 system_pods.go:89] "metrics-server-5c6f97fb75-72ccc" [cde81437-6354-4b6e-97b2-71da55220f7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:42:43.722061   31047 system_pods.go:89] "storage-provisioner" [49990de8-bf79-4a9d-99dd-91cddb6b9f68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:42:43.722066   31047 system_pods.go:126] duration metric: took 203.525763ms to wait for k8s-apps to be running ...
	I0801 17:42:43.722071   31047 system_svc.go:44] waiting for kubelet service to be running ....
	I0801 17:42:43.722121   31047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:42:43.731987   31047 system_svc.go:56] duration metric: took 9.909058ms WaitForService to wait for kubelet.
	I0801 17:42:43.732002   31047 kubeadm.go:572] duration metric: took 4.598353134s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0801 17:42:43.732016   31047 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:42:43.918004   31047 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:42:43.918015   31047 node_conditions.go:123] node cpu capacity is 6
	I0801 17:42:43.918022   31047 node_conditions.go:105] duration metric: took 185.999582ms to run NodePressure ...
	I0801 17:42:43.918030   31047 start.go:216] waiting for startup goroutines ...
	I0801 17:42:43.947785   31047 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:42:43.971474   31047 out.go:177] * Done! kubectl is now configured to use "no-preload-20220801173626-13911" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:37:46 UTC, end at Tue 2022-08-02 00:43:37 UTC. --
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.158523276Z" level=info msg="ignoring event" container=210e689512306230bfc35c62cdfbea892c61a07597f5b9a4f4a89d13c20cbb13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.227827529Z" level=info msg="ignoring event" container=6f304e0eca7d748a31f25b1a8569525a29fd250afa894aac84bf8e465738fd37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.299068914Z" level=info msg="ignoring event" container=aa1a0ca5a473eb28bcd812851cf38c07460e139b460f3d3a461bd512765ef817 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.410787814Z" level=info msg="ignoring event" container=7d9e7719f95b751e52f311973776f26e386e86f3e995d180626c88c2ff62fec9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.479119210Z" level=info msg="ignoring event" container=5f322afed57b554a56eb9629c7b882a66372b8488c785d608726cefb04b1cabf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.551283355Z" level=info msg="ignoring event" container=1565a682f8d571eb0fdb9122caedf952e3512a46a8f1868cb4c221144d1c4773 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.621890660Z" level=info msg="ignoring event" container=3cda2e01c5e7d18165e3697ad47d1fadf4dca4c9a0f56d763cdbe3357d185b20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.688767638Z" level=info msg="ignoring event" container=5b2a3d9c587aa6ffb8011312cc17ed4f982453b89fb97af0c2062a3259696005 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.757013939Z" level=info msg="ignoring event" container=97cf250f960bc11a02007bc871a7ea1423a7b2c88ae2edcddf79cc04f1580400 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.839932509Z" level=info msg="ignoring event" container=90b26d903a5670de69e207d3d6af0a47eb9deca4d893597a2736072d0d0597dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.912026380Z" level=info msg="ignoring event" container=20313b3bb3ecc6abb742caab6502c6265708db21ee17d5f7f5d71fea7c30b406 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:16 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:16.069021866Z" level=info msg="ignoring event" container=1ce596db9881edf6ae7cdf7353bb48e4f85765e5b83d21521a15074829e0bcd7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:41 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:41.615507009Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:42:41 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:41.615587493Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:42:41 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:41.616730287Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:42:42 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:42.497805519Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Aug 02 00:42:45 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:45.254486611Z" level=info msg="ignoring event" container=0e4761b6e3df9ac1035db0952bf76f264f704a88b1c2b4108d43421c35a51e1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:45 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:45.343377570Z" level=info msg="ignoring event" container=b6b3f79cecce51d4c63a43d069266fa357f67300c53c55d92d8b364c023cc565 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:49 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:49.020610168Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:42:49 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:49.317177820Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:42:52 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:52.755468097Z" level=info msg="ignoring event" container=23139eb9793cf3cd28da92b5350d701293390675f91d5dfcd263e29f855047c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:53 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:53.487012230Z" level=info msg="ignoring event" container=be43a37b01ad84b754474a326c889faa74aa5182811427897a0a88adcddda715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:53 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:53.855282432Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:42:53 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:53.855324514Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:42:53 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:53.856689403Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	be43a37b01ad8       a90209bb39e3d                                                                                    45 seconds ago       Exited              dashboard-metrics-scraper   1                   32de5be36ea76
	02205af17d5ad       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   50 seconds ago       Running             kubernetes-dashboard        0                   7282bf38a77f6
	920e63be68c91       6e38f40d628db                                                                                    57 seconds ago       Running             storage-provisioner         0                   e575763a688ef
	e7d06fe14e5ed       a4ca41631cc7a                                                                                    59 seconds ago       Running             coredns                     0                   eb2c1473ce9f8
	aae668aee0c10       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   368789abfa3d0
	57cd3a4eda12e       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   64a5b6d6e9f16
	e37f8ad07936c       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   425c2334c140a
	a16779ed9225f       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   c358be1684e8c
	45c9b744a52c7       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   aa22035d7e8a4
	
	* 
	* ==> coredns [e7d06fe14e5e] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220801173626-13911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220801173626-13911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93
	                    minikube.k8s.io/name=no-preload-20220801173626-13911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_08_01T17_42_24_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Aug 2022 00:42:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220801173626-13911
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Aug 2022 00:43:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Aug 2022 00:43:35 +0000   Tue, 02 Aug 2022 00:42:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Aug 2022 00:43:35 +0000   Tue, 02 Aug 2022 00:42:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Aug 2022 00:43:35 +0000   Tue, 02 Aug 2022 00:42:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Aug 2022 00:43:35 +0000   Tue, 02 Aug 2022 00:43:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220801173626-13911
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c192b04687c403f8fbb9bc7975b21b3
	  System UUID:                c6e3aa28-a480-4c1e-a554-33bdfd25fbc9
	  Boot ID:                    71cf565c-fd32-45eb-95e1-c87a7a5ba5a0
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-flh6s                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     60s
	  kube-system                 etcd-no-preload-20220801173626-13911                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kube-apiserver-no-preload-20220801173626-13911             250m (4%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-no-preload-20220801173626-13911   200m (3%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-8gpjj                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-no-preload-20220801173626-13911             100m (1%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 metrics-server-5c6f97fb75-72ccc                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         58s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-mqzm9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-j8vz8                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 59s   kube-proxy       
	  Normal  Starting                 74s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  74s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  74s   kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s   kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s   kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasSufficientPID
	  Normal  NodeReady                74s   kubelet          Node no-preload-20220801173626-13911 status is now: NodeReady
	  Normal  RegisteredNode           61s   node-controller  Node no-preload-20220801173626-13911 event: Registered Node no-preload-20220801173626-13911 in Controller
	  Normal  Starting                 3s    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s    kubelet          Node no-preload-20220801173626-13911 status is now: NodeNotReady
	  Normal  NodeReady                3s    kubelet          Node no-preload-20220801173626-13911 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [57cd3a4eda12] <==
	* {"level":"info","ts":"2022-08-02T00:42:19.358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-08-02T00:42:19.358Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-08-02T00:42:19.360Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-08-02T00:42:19.362Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:42:19.362Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:42:19.362Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-08-02T00:42:19.362Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:42:19.661Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220801173626-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:42:19.661Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-08-02T00:42:19.662Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:42:19.662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:43:38 up  1:08,  0 users,  load average: 0.50, 0.58, 0.86
	Linux no-preload-20220801173626-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [a16779ed9225] <==
	* I0802 00:42:22.801295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 00:42:23.060224       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 00:42:23.085173       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 00:42:23.160982       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0802 00:42:23.164425       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0802 00:42:23.165095       1 controller.go:611] quota admission added evaluator for: endpoints
	I0802 00:42:23.167943       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0802 00:42:23.955220       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 00:42:24.664021       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0802 00:42:24.669526       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0802 00:42:24.676743       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0802 00:42:24.767664       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 00:42:38.017998       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0802 00:42:38.115879       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0802 00:42:38.727186       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0802 00:42:40.444991       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.106.110.242]
	I0802 00:42:41.119159       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.97.206.177]
	I0802 00:42:41.132584       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.97.165.0]
	W0802 00:42:41.328752       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:42:41.328828       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:42:41.328836       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:42:41.328883       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:42:41.328921       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:42:41.330429       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [45c9b744a52c] <==
	* I0802 00:42:38.373808       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-flh6s"
	I0802 00:42:38.635752       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0802 00:42:38.638585       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-2nn4d"
	I0802 00:42:40.318074       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0802 00:42:40.322506       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0802 00:42:40.333306       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0802 00:42:40.341220       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-72ccc"
	I0802 00:42:40.968128       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0802 00:42:40.973909       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:42:40.975446       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0802 00:42:40.977539       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:42:40.977753       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:42:40.980671       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:42:40.980734       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:42:40.982417       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0802 00:42:41.012520       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:42:41.012613       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:42:41.014333       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:42:41.014436       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:42:41.018478       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:42:41.018510       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:42:41.029020       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-j8vz8"
	I0802 00:42:41.038689       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-mqzm9"
	E0802 00:43:35.316822       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0802 00:43:35.389115       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [aae668aee0c1] <==
	* I0802 00:42:38.684222       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:42:38.684397       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:42:38.684420       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:42:38.722833       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:42:38.723091       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:42:38.723121       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:42:38.723134       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:42:38.723265       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:42:38.724548       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:42:38.724687       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:42:38.724693       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:42:38.725251       1 config.go:444] "Starting node config controller"
	I0802 00:42:38.725260       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:42:38.725279       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:42:38.725283       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:42:38.725297       1 config.go:317] "Starting service config controller"
	I0802 00:42:38.725302       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:42:38.825877       1 shared_informer.go:262] Caches are synced for service config
	I0802 00:42:38.825942       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0802 00:42:38.825994       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e37f8ad07936] <==
	* W0802 00:42:21.859061       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 00:42:21.859069       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 00:42:21.859125       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 00:42:21.859190       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 00:42:21.859620       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 00:42:21.859683       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 00:42:21.859989       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0802 00:42:21.860011       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0802 00:42:21.860192       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 00:42:21.860226       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 00:42:21.860183       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 00:42:21.860309       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 00:42:22.730936       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 00:42:22.730984       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0802 00:42:22.764306       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 00:42:22.764372       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 00:42:22.795350       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 00:42:22.795443       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 00:42:22.813104       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 00:42:22.813231       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0802 00:42:22.919962       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 00:42:22.920050       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 00:42:22.935072       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 00:42:22.935179       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0802 00:42:23.454947       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
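
The forbidden errors above are the usual kube-scheduler startup race: its informers begin listing before the apiserver has finished publishing the scheduler's RBAC bindings, and they stop once authorization catches up (hence the closing caches-synced line). The same authorization decision can be probed with a SelfSubjectAccessReview; a hedged sketch, assuming a kubeconfig for the identity under test:

	// Sketch: ask the apiserver whether the current identity may list nodes
	// at cluster scope, the check the scheduler's listers were failing above.
	package main

	import (
		"context"
		"fmt"

		authorizationv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		review := &authorizationv1.SelfSubjectAccessReview{
			Spec: authorizationv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authorizationv1.ResourceAttributes{
					Verb:     "list",
					Resource: "nodes",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
			Create(context.TODO(), review, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
	}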
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:37:46 UTC, end at Tue 2022-08-02 00:43:39 UTC. --
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.838919    9861 topology_manager.go:200] "Topology Admit Handler"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.838947    9861 topology_manager.go:200] "Topology Admit Handler"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.839024    9861 topology_manager.go:200] "Topology Admit Handler"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.839168    9861 topology_manager.go:200] "Topology Admit Handler"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863492    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlcnv\" (UniqueName: \"kubernetes.io/projected/cde81437-6354-4b6e-97b2-71da55220f7d-kube-api-access-vlcnv\") pod \"metrics-server-5c6f97fb75-72ccc\" (UID: \"cde81437-6354-4b6e-97b2-71da55220f7d\") " pod="kube-system/metrics-server-5c6f97fb75-72ccc"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863553    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjs6\" (UniqueName: \"kubernetes.io/projected/fe43012f-2cca-414d-a62c-2a7a59aa5517-kube-api-access-4fjs6\") pod \"dashboard-metrics-scraper-dffd48c4c-mqzm9\" (UID: \"fe43012f-2cca-414d-a62c-2a7a59aa5517\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-mqzm9"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863739    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnprj\" (UniqueName: \"kubernetes.io/projected/24c63150-6434-42c5-abeb-967bd7e0a8b7-kube-api-access-mnprj\") pod \"kube-proxy-8gpjj\" (UID: \"24c63150-6434-42c5-abeb-967bd7e0a8b7\") " pod="kube-system/kube-proxy-8gpjj"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863782    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cde81437-6354-4b6e-97b2-71da55220f7d-tmp-dir\") pod \"metrics-server-5c6f97fb75-72ccc\" (UID: \"cde81437-6354-4b6e-97b2-71da55220f7d\") " pod="kube-system/metrics-server-5c6f97fb75-72ccc"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863804    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/24c63150-6434-42c5-abeb-967bd7e0a8b7-kube-proxy\") pod \"kube-proxy-8gpjj\" (UID: \"24c63150-6434-42c5-abeb-967bd7e0a8b7\") " pod="kube-system/kube-proxy-8gpjj"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863880    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24c63150-6434-42c5-abeb-967bd7e0a8b7-lib-modules\") pod \"kube-proxy-8gpjj\" (UID: \"24c63150-6434-42c5-abeb-967bd7e0a8b7\") " pod="kube-system/kube-proxy-8gpjj"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863922    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gpn4\" (UniqueName: \"kubernetes.io/projected/49990de8-bf79-4a9d-99dd-91cddb6b9f68-kube-api-access-6gpn4\") pod \"storage-provisioner\" (UID: \"49990de8-bf79-4a9d-99dd-91cddb6b9f68\") " pod="kube-system/storage-provisioner"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863938    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5d36ef6e-3081-4a75-a775-d906fc182113-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-j8vz8\" (UID: \"5d36ef6e-3081-4a75-a775-d906fc182113\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-j8vz8"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864008    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t77q\" (UniqueName: \"kubernetes.io/projected/5d36ef6e-3081-4a75-a775-d906fc182113-kube-api-access-9t77q\") pod \"kubernetes-dashboard-5fd5574d9f-j8vz8\" (UID: \"5d36ef6e-3081-4a75-a775-d906fc182113\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-j8vz8"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864050    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/49990de8-bf79-4a9d-99dd-91cddb6b9f68-tmp\") pod \"storage-provisioner\" (UID: \"49990de8-bf79-4a9d-99dd-91cddb6b9f68\") " pod="kube-system/storage-provisioner"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864065    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24c63150-6434-42c5-abeb-967bd7e0a8b7-xtables-lock\") pod \"kube-proxy-8gpjj\" (UID: \"24c63150-6434-42c5-abeb-967bd7e0a8b7\") " pod="kube-system/kube-proxy-8gpjj"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864142    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/407a23dd-cab9-4929-a0e6-d71acc8c10d6-config-volume\") pod \"coredns-6d4b75cb6d-flh6s\" (UID: \"407a23dd-cab9-4929-a0e6-d71acc8c10d6\") " pod="kube-system/coredns-6d4b75cb6d-flh6s"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864212    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fe43012f-2cca-414d-a62c-2a7a59aa5517-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-mqzm9\" (UID: \"fe43012f-2cca-414d-a62c-2a7a59aa5517\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-mqzm9"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864345    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnphr\" (UniqueName: \"kubernetes.io/projected/407a23dd-cab9-4929-a0e6-d71acc8c10d6-kube-api-access-nnphr\") pod \"coredns-6d4b75cb6d-flh6s\" (UID: \"407a23dd-cab9-4929-a0e6-d71acc8c10d6\") " pod="kube-system/coredns-6d4b75cb6d-flh6s"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864393    9861 reconciler.go:157] "Reconciler: start to sync state"
	Aug 02 00:43:38 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:38.015058    9861 request.go:601] Waited for 1.08021755s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 02 00:43:38 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:38.022770    9861 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220801173626-13911\" already exists" pod="kube-system/kube-scheduler-no-preload-20220801173626-13911"
	Aug 02 00:43:38 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:38.270757    9861 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220801173626-13911\" already exists" pod="kube-system/kube-apiserver-no-preload-20220801173626-13911"
	Aug 02 00:43:38 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:38.466659    9861 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220801173626-13911\" already exists" pod="kube-system/etcd-no-preload-20220801173626-13911"
	Aug 02 00:43:38 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:38.627346    9861 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220801173626-13911\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220801173626-13911"
	Aug 02 00:43:39 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:39.219354    9861 scope.go:110] "RemoveContainer" containerID="be43a37b01ad84b754474a326c889faa74aa5182811427897a0a88adcddda715"
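
Two details in the kubelet log are worth noting: the request.go:601 wait comes from client-go's client-side token-bucket limiter (the message itself distinguishes it from server-side priority and fairness), and the "Failed creating a mirror pod ... already exists" errors are benign replays of static-pod mirrors after restart. The client-side throttle is governed by the QPS and Burst fields on rest.Config; a sketch with illustrative values (not the kubelet's actual settings):

	// Sketch: configure client-go's client-side rate limiter. Driving many
	// requests through the client makes the "Waited for ... due to
	// client-side throttling" behavior visible.
	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cfg.QPS = 5    // sustained requests per second (illustrative)
		cfg.Burst = 10 // bucket size before requests start queueing
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Requests beyond Burst queue up inside the client.
		for i := 0; i < 50; i++ {
			_, _ = cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		}
	}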
	
	* 
	* ==> kubernetes-dashboard [02205af17d5a] <==
	* 2022/08/02 00:42:48 Using namespace: kubernetes-dashboard
	2022/08/02 00:42:48 Using in-cluster config to connect to apiserver
	2022/08/02 00:42:48 Using secret token for csrf signing
	2022/08/02 00:42:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/08/02 00:42:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/08/02 00:42:48 Successful initial request to the apiserver, version: v1.24.3
	2022/08/02 00:42:48 Generating JWE encryption key
	2022/08/02 00:42:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/08/02 00:42:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/08/02 00:42:48 Initializing JWE encryption key from synchronized object
	2022/08/02 00:42:48 Creating in-cluster Sidecar client
	2022/08/02 00:42:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/08/02 00:42:48 Serving insecurely on HTTP port: 9090
	2022/08/02 00:42:48 Starting overwatch
	2022/08/02 00:43:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
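
The CSRF lines trace a read-or-generate flow: the dashboard looks up the kubernetes-dashboard-csrf Secret, finds no token, generates one, and stores it back. A simplified stand-in for that flow with client-go (the namespace and Secret name mirror the log; the logic is illustrative, not the dashboard's actual code):

	// Simplified stand-in for the dashboard's CSRF-key bootstrap.
	package csrfsketch

	import (
		"context"
		"crypto/rand"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func ensureCSRFKey(cs kubernetes.Interface) ([]byte, error) {
		const ns, name = "kubernetes-dashboard", "kubernetes-dashboard-csrf"
		sec, err := cs.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		switch {
		case err == nil && len(sec.Data["csrf"]) > 0:
			return sec.Data["csrf"], nil // reuse the stored token
		case err != nil && !apierrors.IsNotFound(err):
			return nil, err
		}
		// "Empty token. Generating and storing in a secret ..."
		key := make([]byte, 256)
		if _, err := rand.Read(key); err != nil {
			return nil, err
		}
		sec = &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: name},
			Data:       map[string][]byte{"csrf": key},
		}
		// A real implementation would Update when the Secret already exists.
		_, err = cs.CoreV1().Secrets(ns).Create(context.TODO(), sec, metav1.CreateOptions{})
		return key, err
	}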
	
	* 
	* ==> storage-provisioner [920e63be68c9] <==
	* I0802 00:42:41.365655       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 00:42:41.374723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 00:42:41.374759       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 00:42:41.381384       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 00:42:41.381498       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220801173626-13911_17d9f30b-bcd3-4a5c-9ad6-b7c1cd62a7bb!
	I0802 00:42:41.383056       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d00c2d0b-a52e-46a1-b0d9-f30e26be90f3", APIVersion:"v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220801173626-13911_17d9f30b-bcd3-4a5c-9ad6-b7c1cd62a7bb became leader
	I0802 00:42:41.482193       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220801173626-13911_17d9f30b-bcd3-4a5c-9ad6-b7c1cd62a7bb!
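
The storage-provisioner lines show client-go leader election: the provisioner controller only starts after acquiring the kube-system/k8s.io-minikube-hostpath lock, and the acquisition is recorded as the LeaderElection event above (an Endpoints-based lock in this log). A sketch of the same pattern using the Lease-based lock that current client-go favors; the identity string and timings here are illustrative:

	// Sketch of the client-go leader-election pattern behind the
	// "attempting to acquire leader lease" / "successfully acquired lease"
	// lines above.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-holder"},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					fmt.Println("successfully acquired lease; starting controller")
				},
				OnStoppedLeading: func() { fmt.Println("lost lease") },
			},
		})
	}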
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220801173626-13911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-72ccc
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220801173626-13911 describe pod metrics-server-5c6f97fb75-72ccc
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220801173626-13911 describe pod metrics-server-5c6f97fb75-72ccc: exit status 1 (283.392356ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-72ccc" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220801173626-13911 describe pod metrics-server-5c6f97fb75-72ccc: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220801173626-13911
helpers_test.go:235: (dbg) docker inspect no-preload-20220801173626-13911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c",
	        "Created": "2022-08-02T00:36:28.462022339Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268936,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:37:46.382223185Z",
	            "FinishedAt": "2022-08-02T00:37:44.350232944Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c/hostname",
	        "HostsPath": "/var/lib/docker/containers/102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c/hosts",
	        "LogPath": "/var/lib/docker/containers/102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c/102f6ba38eb42a3338a8d89e6ff97eb7298f6084f4c7255d2a74be23e00d329c-json.log",
	        "Name": "/no-preload-20220801173626-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220801173626-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220801173626-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4bc2a91037ad8ee229bf7d3a0907a2001651ed7982fa85c577929eba6ddd02a6-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4ec465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/docker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4bc2a91037ad8ee229bf7d3a0907a2001651ed7982fa85c577929eba6ddd02a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4bc2a91037ad8ee229bf7d3a0907a2001651ed7982fa85c577929eba6ddd02a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4bc2a91037ad8ee229bf7d3a0907a2001651ed7982fa85c577929eba6ddd02a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220801173626-13911",
	                "Source": "/var/lib/docker/volumes/no-preload-20220801173626-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220801173626-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220801173626-13911",
	                "name.minikube.sigs.k8s.io": "no-preload-20220801173626-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7448afeb5c2dbc9c26c2b32362de1b7224d710927e15d48b41f8303e6786b40f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51290"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51291"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51292"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51293"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51289"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7448afeb5c2d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220801173626-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "102f6ba38eb4",
	                        "no-preload-20220801173626-13911"
	                    ],
	                    "NetworkID": "363df1b6c81b32b4a7ad3992422335fcbb0b1e69be15a3e6ad5758b34c73d5d3",
	                    "EndpointID": "dedb229046ff2716bfa9a4592b609c9537acfee644a7eff4393fb3778238b1fc",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
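The fields this post-mortem actually relies on from the dump above (State.Status, the published host ports, and the network's IPAddress) can also be pulled programmatically rather than by reading the full JSON. A minimal sketch with the Docker Go SDK, assuming a local daemon and the container name shown above:

	// Sketch: fetch the same data as `docker inspect` via the Docker Go SDK.
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		info, err := cli.ContainerInspect(context.Background(), "no-preload-20220801173626-13911")
		if err != nil {
			panic(err)
		}
		// e.g. "running true" for the dump above
		fmt.Println(info.State.Status, info.State.Running)
		for name, n := range info.NetworkSettings.Networks {
			fmt.Println(name, n.IPAddress) // e.g. 192.168.67.2
		}
	}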
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220801173626-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220801173626-13911 logs -n 25: (2.780686207s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                   |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p                                                | kubenet-20220801171037-13911               | jenkins | v1.26.0 | 01 Aug 22 17:28 PDT | 01 Aug 22 17:28 PDT |
	|         | kubenet-20220801171037-13911                      |                                            |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                            |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220801171037-13911               | jenkins | v1.26.0 | 01 Aug 22 17:29 PDT | 01 Aug 22 17:29 PDT |
	|         | kubenet-20220801171037-13911                      |                                            |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:29 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                            |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                            |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                            |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:30 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                            |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:30 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:31 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                            |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                            |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT | 01 Aug 22 17:33 PDT |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                            |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT | 01 Aug 22 17:33 PDT |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                            |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220801172716-13911       | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                            |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                            |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                            |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                            |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                            |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                            |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                            |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                            |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911           | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                            |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220801173625-13911 | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | disable-driver-mounts-20220801173625-13911        |                                            |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                            |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                            |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                            |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                            |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:42 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                            |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                            |         |         |                     |                     |
	|         | --driver=docker                                   |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                            |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                            |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                            |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220801173626-13911            | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                            |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                            |         |         |                     |                     |
	|---------|---------------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:37:45
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:37:45.136795   31047 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:37:45.137023   31047 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:37:45.137028   31047 out.go:309] Setting ErrFile to fd 2...
	I0801 17:37:45.137032   31047 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:37:45.137145   31047 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:37:45.137612   31047 out.go:303] Setting JSON to false
	I0801 17:37:45.152591   31047 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9436,"bootTime":1659391229,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:37:45.152701   31047 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:37:45.174344   31047 out.go:177] * [no-preload-20220801173626-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:37:45.196180   31047 notify.go:193] Checking for updates...
	I0801 17:37:45.217756   31047 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:37:45.238861   31047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:37:45.260039   31047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:37:45.280936   31047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:37:45.302202   31047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:37:45.324757   31047 config.go:180] Loaded profile config "no-preload-20220801173626-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:37:45.325426   31047 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:37:45.394757   31047 docker.go:137] docker version: linux-20.10.17
	I0801 17:37:45.394914   31047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:37:45.527503   31047 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:37:45.457586218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:37:45.571140   31047 out.go:177] * Using the docker driver based on existing profile
	I0801 17:37:45.592082   31047 start.go:284] selected driver: docker
	I0801 17:37:45.592099   31047 start.go:808] validating driver "docker" against &{Name:no-preload-20220801173626-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedul
edStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:37:45.592198   31047 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:37:45.594452   31047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:37:45.733083   31047 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:37:45.664473823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:37:45.733245   31047 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 17:37:45.733262   31047 cni.go:95] Creating CNI manager for ""
	I0801 17:37:45.733271   31047 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
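Aside: the two cni.go lines above record a decision, not an error. A minimal Go sketch of that decision, under the assumption (not taken from minikube's source) that the Docker runtime on a single-node docker-driver cluster needs no CNI plugin:

    package main

    import "fmt"

    // chooseCNI is a hypothetical helper mirroring the decision logged above:
    // with the Docker runtime on a single-node docker-driver cluster, no CNI
    // plugin needs to be configured.
    func chooseCNI(containerRuntime string, nodes int) string {
        if containerRuntime == "docker" && nodes == 1 {
            return "" // CNI unnecessary in this configuration
        }
        return "auto" // otherwise let the tool pick a default CNI
    }

    func main() {
        fmt.Printf("CNI choice: %q\n", chooseCNI("docker", 1))
    }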
	I0801 17:37:45.733294   31047 start_flags.go:310] config:
	{Name:no-preload-20220801173626-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:37:45.777022   31047 out.go:177] * Starting control plane node no-preload-20220801173626-13911 in cluster no-preload-20220801173626-13911
	I0801 17:37:45.799262   31047 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:37:45.820970   31047 out.go:177] * Pulling base image ...
	I0801 17:37:45.842197   31047 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:37:45.842217   31047 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:37:45.842421   31047 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/config.json ...
	I0801 17:37:45.842537   31047 cache.go:107] acquiring lock: {Name:mkce27c207a7bf01881de4cf2e18a8ec061785d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.842574   31047 cache.go:107] acquiring lock: {Name:mk33f064d166c5a0dc9a025cb9d5db4a25dde34f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.843994   31047 cache.go:107] acquiring lock: {Name:mk83ada496db165959cae463687f409b745fe431 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844359   31047 cache.go:107] acquiring lock: {Name:mk1a37bbfd8a0fda4175037a2df9b28a8bff25fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844423   31047 cache.go:107] acquiring lock: {Name:mk8f04950ca6b931221e073d61c347db62721cdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844390   31047 cache.go:107] acquiring lock: {Name:mk885468f27c8850bc0b7933d3a2ff478aab774d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844464   31047 cache.go:107] acquiring lock: {Name:mk3407b9bf31dee0ad589c69c26f0a179fd3a6e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.844507   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 exists
	I0801 17:37:45.844473   31047 cache.go:107] acquiring lock: {Name:mk8a29c24e1671055af457da8f29bfaf97f492d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.845147   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 exists
	I0801 17:37:45.845108   31047 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3" took 1.980679ms
	I0801 17:37:45.844483   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0801 17:37:45.845289   31047 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 succeeded
	I0801 17:37:45.845305   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 exists
	I0801 17:37:45.845302   31047 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.789096ms
	I0801 17:37:45.845308   31047 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0" took 2.704085ms
	I0801 17:37:45.845327   31047 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0801 17:37:45.845337   31047 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 succeeded
	I0801 17:37:45.845331   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 exists
	I0801 17:37:45.845313   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 exists
	I0801 17:37:45.845364   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 exists
	I0801 17:37:45.845372   31047 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3" took 1.105087ms
	I0801 17:37:45.845382   31047 cache.go:96] cache image "k8s.gcr.io/pause:3.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7" took 1.189591ms
	I0801 17:37:45.845390   31047 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 succeeded
	I0801 17:37:45.845393   31047 cache.go:80] save to tar file k8s.gcr.io/pause:3.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 succeeded
	I0801 17:37:45.845393   31047 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3" took 1.139616ms
	I0801 17:37:45.845347   31047 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0801 17:37:45.845416   31047 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 succeeded
	I0801 17:37:45.845331   31047 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.24.3" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3" took 1.102183ms
	I0801 17:37:45.845430   31047 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.24.3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 succeeded
	I0801 17:37:45.845426   31047 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 1.373082ms
	I0801 17:37:45.845440   31047 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0801 17:37:45.845462   31047 cache.go:87] Successfully saved all images to host disk.
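The burst of cache.go lines above follows one pattern per image: acquire a named lock, stat the tarball under .minikube/cache/images, and record the sub-millisecond duration when it already exists. A hedged Go sketch of that check, with sync.Mutex standing in for whatever lock implementation minikube actually uses:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "sync"
        "time"
    )

    var locks sync.Map // one lock per cache path; a stand-in for the named locks in the log

    func cacheImage(cacheDir, image string) error {
        dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
        mu, _ := locks.LoadOrStore(dst, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        start := time.Now()
        if _, err := os.Stat(dst); err == nil {
            // matches the "exists" / "cache image ... took ..." lines above
            fmt.Printf("cache image %q -> %q took %s (already cached)\n", image, dst, time.Since(start))
            return nil
        }
        // a real implementation would pull the image and save it to a tarball here
        return fmt.Errorf("download of %s not implemented in this sketch", image)
    }

    func main() {
        _ = cacheImage(os.TempDir(), "k8s.gcr.io/pause:3.7")
    }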
	I0801 17:37:45.908069   31047 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:37:45.908096   31047 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:37:45.908107   31047 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:37:45.908147   31047 start.go:371] acquiring machines lock for no-preload-20220801173626-13911: {Name:mkda6e117952af39a3874882bbd203241b49719c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:37:45.908210   31047 start.go:375] acquired machines lock for "no-preload-20220801173626-13911" in 52.481µs
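The machines lock above is held around the whole host-fixing phase (released at 17:37:48 below, "held for 2.39650437s"). As an illustration only, not minikube's implementation, a flock-based (Unix-only) version of the same acquire/report/release shape:

    package main

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // withMachinesLock takes an exclusive advisory lock on path, runs fn,
    // and reports acquisition and hold times as the log lines above do.
    func withMachinesLock(path string, fn func()) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()
        start := time.Now()
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        fmt.Printf("acquired machines lock in %s\n", time.Since(start))
        defer func() {
            syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
            fmt.Printf("released machines lock, held for %s\n", time.Since(start))
        }()
        fn()
        return nil
    }

    func main() {
        _ = withMachinesLock(os.TempDir()+"/machines.lock", func() { time.Sleep(10 * time.Millisecond) })
    }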
	I0801 17:37:45.908230   31047 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:37:45.908238   31047 fix.go:55] fixHost starting: 
	I0801 17:37:45.908457   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:37:45.974772   31047 fix.go:103] recreateIfNeeded on no-preload-20220801173626-13911: state=Stopped err=<nil>
	W0801 17:37:45.974798   31047 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:37:45.996880   31047 out.go:177] * Restarting existing docker container for "no-preload-20220801173626-13911" ...
	I0801 17:37:46.018574   31047 cli_runner.go:164] Run: docker start no-preload-20220801173626-13911
	I0801 17:37:46.384675   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:37:46.457749   31047 kic.go:415] container "no-preload-20220801173626-13911" state is running.
	I0801 17:37:46.458352   31047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220801173626-13911
	I0801 17:37:46.531639   31047 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/config.json ...
	I0801 17:37:46.532029   31047 machine.go:88] provisioning docker machine ...
	I0801 17:37:46.532061   31047 ubuntu.go:169] provisioning hostname "no-preload-20220801173626-13911"
	I0801 17:37:46.532140   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:46.605057   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:46.605254   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:46.605270   31047 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220801173626-13911 && echo "no-preload-20220801173626-13911" | sudo tee /etc/hostname
	I0801 17:37:46.733056   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220801173626-13911
	
	I0801 17:37:46.733140   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:46.805118   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:46.805272   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:46.805287   31047 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220801173626-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220801173626-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220801173626-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:37:46.917485   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: 
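The SSH script above is an idempotent /etc/hosts edit: keep any existing line ending in the hostname, otherwise rewrite the 127.0.1.1 entry, otherwise append one. The same logic as a self-contained Go sketch (ensureHostEntry is a hypothetical helper, not a minikube function):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostEntry reproduces the shell above: return hosts unchanged if
    // a line already ends in name, else rewrite an existing 127.0.1.1 entry,
    // else append one.
    func ensureHostEntry(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            if f := strings.Fields(l); len(f) > 0 && f[len(f)-1] == name {
                return hosts // grep -xq '.*\s<name>': entry already present
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name // sed 's/^127.0.1.1\s.*/.../'
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + name // tee -a /etc/hosts
    }

    func main() {
        fmt.Println(ensureHostEntry("127.0.0.1 localhost", "no-preload-20220801173626-13911"))
    }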
	I0801 17:37:46.917506   31047 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:37:46.917535   31047 ubuntu.go:177] setting up certificates
	I0801 17:37:46.917541   31047 provision.go:83] configureAuth start
	I0801 17:37:46.917615   31047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220801173626-13911
	I0801 17:37:46.990412   31047 provision.go:138] copyHostCerts
	I0801 17:37:46.990491   31047 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:37:46.990502   31047 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:37:46.990596   31047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:37:46.990798   31047 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:37:46.990808   31047 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:37:46.990864   31047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:37:46.991000   31047 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:37:46.991007   31047 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:37:46.991062   31047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:37:46.991772   31047 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220801173626-13911 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220801173626-13911]
	I0801 17:37:47.183740   31047 provision.go:172] copyRemoteCerts
	I0801 17:37:47.183812   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:37:47.183860   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.256107   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:47.339121   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:37:47.356831   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0801 17:37:47.373830   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0801 17:37:47.392418   31047 provision.go:86] duration metric: configureAuth took 474.857796ms
	I0801 17:37:47.392433   31047 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:37:47.392595   31047 config.go:180] Loaded profile config "no-preload-20220801173626-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:37:47.392663   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.464884   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:47.465036   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:47.465047   31047 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:37:47.579712   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:37:47.579729   31047 ubuntu.go:71] root file system type: overlay
	I0801 17:37:47.579870   31047 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:37:47.579944   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.650983   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:47.651127   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:47.651186   31047 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:37:47.774346   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:37:47.774436   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:47.845704   31047 main.go:134] libmachine: Using SSH client type: native
	I0801 17:37:47.845865   31047 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51290 <nil> <nil>}
	I0801 17:37:47.845879   31047 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:37:47.964006   31047 main.go:134] libmachine: SSH cmd err, output: <nil>: 
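The one-liner above is the idempotent-update idiom: diff the candidate unit against the installed one, and only on a difference move it into place, daemon-reload, enable, and restart. A rough Go equivalent of the same idea (paths and systemctl steps as in the command, everything else assumed):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnit swaps the candidate unit into place and restarts docker
    // only when the contents actually differ, keeping the step idempotent.
    func updateUnit(current, candidate string) error {
        installed, _ := os.ReadFile(current) // a missing unit reads as empty
        proposed, err := os.ReadFile(candidate)
        if err != nil {
            return err
        }
        if bytes.Equal(installed, proposed) {
            return nil // unchanged: leave the running daemon alone
        }
        if err := os.Rename(candidate, current); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "-f", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(updateUnit(
            "/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new"))
    }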
	I0801 17:37:47.964021   31047 machine.go:91] provisioned docker machine in 1.43196114s
	I0801 17:37:47.964037   31047 start.go:307] post-start starting for "no-preload-20220801173626-13911" (driver="docker")
	I0801 17:37:47.964043   31047 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:37:47.964117   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:37:47.964170   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.035712   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.118288   31047 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:37:48.121549   31047 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:37:48.121566   31047 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:37:48.121586   31047 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:37:48.121595   31047 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:37:48.121603   31047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:37:48.121710   31047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:37:48.121847   31047 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:37:48.121999   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:37:48.129029   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:37:48.146801   31047 start.go:310] post-start completed in 182.747614ms
	I0801 17:37:48.146864   31047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:37:48.146917   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.217007   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.300445   31047 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:37:48.304748   31047 fix.go:57] fixHost completed within 2.396472477s
	I0801 17:37:48.304758   31047 start.go:82] releasing machines lock for "no-preload-20220801173626-13911", held for 2.39650437s
	I0801 17:37:48.304820   31047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220801173626-13911
	I0801 17:37:48.374117   31047 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:37:48.374143   31047 ssh_runner.go:195] Run: systemctl --version
	I0801 17:37:48.374196   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.374212   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:48.449727   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.451539   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:37:48.719080   31047 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:37:48.729189   31047 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:37:48.729244   31047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:37:48.740655   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:37:48.753772   31047 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:37:48.824006   31047 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:37:48.896529   31047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:37:48.963357   31047 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:37:49.205490   31047 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0801 17:37:49.268926   31047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:37:49.323147   31047 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0801 17:37:49.332627   31047 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0801 17:37:49.332704   31047 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0801 17:37:49.336848   31047 start.go:471] Will wait 60s for crictl version
	I0801 17:37:49.336901   31047 ssh_runner.go:195] Run: sudo crictl version
	I0801 17:37:49.441376   31047 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
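start.go waits up to 60s twice above: first for the cri-dockerd socket file to exist (the stat run over SSH), then for `crictl version` to answer. A generic poll-with-deadline sketch covering both; the 500ms interval is an assumption, not taken from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitFor polls ready() until it succeeds or the deadline passes.
    func waitFor(timeout time.Duration, ready func() bool) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ready() {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s", timeout)
    }

    func main() {
        // first wait: the socket file must exist
        _ = waitFor(60*time.Second, func() bool {
            _, err := os.Stat("/var/run/cri-dockerd.sock")
            return err == nil
        })
        // second wait: crictl must answer (run with sudo, as above)
        _ = waitFor(60*time.Second, func() bool {
            return exec.Command("sudo", "crictl", "version").Run() == nil
        })
    }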
	I0801 17:37:49.441442   31047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:37:49.478518   31047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:37:49.557572   31047 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0801 17:37:49.557790   31047 cli_runner.go:164] Run: docker exec -t no-preload-20220801173626-13911 dig +short host.docker.internal
	I0801 17:37:49.686230   31047 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:37:49.686336   31047 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:37:49.690942   31047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:37:49.700964   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:49.771329   31047 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:37:49.771383   31047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:37:49.802366   31047 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0801 17:37:49.802385   31047 cache_images.go:84] Images are preloaded, skipping loading
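The "Images are preloaded, skipping loading" decision reduces to a set comparison: every required image must appear in the `docker images --format {{.Repository}}:{{.Tag}}` output just captured. A minimal sketch of that check:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagesPreloaded reports whether every required image is already
    // present in the runtime, using the same docker CLI call as the log.
    func imagesPreloaded(required []string) (bool, error) {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        for _, img := range required {
            if !have[img] {
                return false, nil // at least one image must still be loaded
            }
        }
        return true, nil
    }

    func main() {
        ok, err := imagesPreloaded([]string{"k8s.gcr.io/pause:3.7", "k8s.gcr.io/etcd:3.5.3-0"})
        fmt.Println(ok, err)
    }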
	I0801 17:37:49.802458   31047 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:37:49.879052   31047 cni.go:95] Creating CNI manager for ""
	I0801 17:37:49.879064   31047 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:37:49.879080   31047 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:37:49.879096   31047 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220801173626-13911 NodeName:no-preload-20220801173626-13911 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/
var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:37:49.879194   31047 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "no-preload-20220801173626-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0801 17:37:49.879290   31047 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=no-preload-20220801173626-13911 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:37:49.879351   31047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 17:37:49.887424   31047 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:37:49.887487   31047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:37:49.894755   31047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (493 bytes)
	I0801 17:37:49.908266   31047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:37:49.920870   31047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0801 17:37:49.933830   31047 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:37:49.937511   31047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:37:49.946559   31047 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911 for IP: 192.168.67.2
	I0801 17:37:49.946659   31047 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:37:49.946707   31047 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:37:49.946786   31047 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.key
	I0801 17:37:49.946845   31047 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/apiserver.key.c7fa3a9e
	I0801 17:37:49.946897   31047 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/proxy-client.key
	I0801 17:37:49.947100   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:37:49.947138   31047 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:37:49.947151   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:37:49.947189   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:37:49.947218   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:37:49.947250   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:37:49.947309   31047 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:37:49.947829   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:37:49.964521   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0801 17:37:49.981144   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:37:49.997236   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0801 17:37:50.014091   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:37:50.030809   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:37:50.047089   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:37:50.063912   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:37:50.082297   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:37:50.101186   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:37:50.118882   31047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:37:50.136291   31047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:37:50.149676   31047 ssh_runner.go:195] Run: openssl version
	I0801 17:37:50.163581   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:37:50.171105   31047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:37:50.174935   31047 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:37:50.174989   31047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:37:50.179840   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 17:37:50.186763   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:37:50.194343   31047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:37:50.198345   31047 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:37:50.198395   31047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:37:50.203934   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:37:50.210838   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:37:50.218583   31047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:37:50.222458   31047 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:37:50.222498   31047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:37:50.227505   31047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
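Annotation: the openssl/ln sequence above computes each CA's OpenSSL subject hash and symlinks /etc/ssl/certs/<hash>.0 at the PEM, which is the lookup scheme OpenSSL's certificate directory expects. A sketch of that hash-and-link step, run locally as root (the path is one of the files from the log; this reproduces the commands shown, not minikube's code):

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkCA reproduces the "openssl x509 -hash -noout" + "ln -fs" pair above.
func linkCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // emulate ln -fs: drop any stale link before relinking
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}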
	I0801 17:37:50.234458   31047 kubeadm.go:395] StartCluster: {Name:no-preload-20220801173626-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:no-preload-20220801173626-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
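Annotation: the StartCluster line above dumps the profile's cluster configuration as a Go struct. A minimal sketch of how a few of those fields could be modeled and round-tripped; the field subset and struct shapes below are assumptions inferred from the dump, not minikube's actual types.

package main

import (
	"encoding/json"
	"fmt"
)

// KubernetesConfig and ClusterConfig mirror a handful of the fields visible
// in the dump; the real types carry many more.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	ServiceCIDR       string
	NodePort          int
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig KubernetesConfig
}

func main() {
	cc := ClusterConfig{
		Name:   "no-preload-20220801173626-13911",
		Driver: "docker",
		Memory: 2200,
		CPUs:   2,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.24.3",
			ClusterName:       "no-preload-20220801173626-13911",
			ContainerRuntime:  "docker",
			ServiceCIDR:       "10.96.0.0/12",
			NodePort:          8443,
		},
	}
	b, _ := json.MarshalIndent(cc, "", "  ")
	fmt.Println(string(b))
}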
	I0801 17:37:50.234558   31047 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:37:50.264051   31047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:37:50.271634   31047 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:37:50.271652   31047 kubeadm.go:626] restartCluster start
	I0801 17:37:50.271694   31047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:37:50.278298   31047 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.278364   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:37:50.349453   31047 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220801173626-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:37:50.349640   31047 kubeconfig.go:127] "no-preload-20220801173626-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:37:50.349966   31047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
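Annotation: the repair above fires because the profile's context is absent from the run's kubeconfig, which is then rewritten under a file lock. A sketch of the same existence check using client-go's clientcmd loader; the kubeconfig path is a placeholder, and the repair itself is only indicated in a comment.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	const profile = "no-preload-20220801173626-13911"
	if _, ok := cfg.Contexts[profile]; !ok {
		// The "will repair!" branch above: the context has to be re-added and
		// the file rewritten under a lock before kubectl can use it.
		fmt.Printf("%q missing from kubeconfig - needs repair\n", profile)
		return
	}
	fmt.Printf("%q present\n", profile)
}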
	I0801 17:37:50.351119   31047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:37:50.358739   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.358794   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.366952   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.567082   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.567203   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.576999   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.769130   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.769340   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.779725   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:50.969182   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:50.969292   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:50.979800   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.167920   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.168015   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.178836   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.367096   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.367205   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.376391   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.569038   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.569130   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.578185   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.769147   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.769333   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.779768   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:51.967690   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:51.967807   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:51.978203   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.168126   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.168251   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.178788   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.367362   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.367477   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.376348   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.569124   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.569313   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.579843   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.767372   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.767476   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.776970   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:52.968285   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:52.968420   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:52.978224   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.168014   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:53.168103   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:53.178218   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.369185   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:53.369348   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:53.380616   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.380627   31047 api_server.go:165] Checking apiserver status ...
	I0801 17:37:53.380671   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:37:53.388701   31047 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.388714   31047 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
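Annotation: each "Checking apiserver status" round above is a pgrep for a kube-apiserver process, retried roughly every 200ms; once every attempt within the window exits non-zero, the code concludes the apiserver is gone and a reconfigure is needed. A standalone sketch of that poll-until-deadline pattern (the pgrep pattern and cadence are copied from the log; the helper and timeout value are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the check above: pgrep exits 0 only if a
// kube-apiserver process matching the minikube pattern exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(3 * time.Second) // the run above gives up after ~3s
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence in the log
	}
	fmt.Println("needs reconfigure: apiserver never appeared")
}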
	I0801 17:37:53.388723   31047 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:37:53.388774   31047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:37:53.420707   31047 docker.go:443] Stopping containers: [d5a3d4ccde35 795a7dfc5c0b 9c6c1ed81713 1d852044111d 2f0cbdfcc618 803f6a6ae70d 41e8b95b80bc b8daaea5d97c b53b375d313f be3fbf75c305 482dbbf122e4 5abcdb77ef04 302f547a73d8 5c08de9ffe04 daf4df3d9163 4dd96b3aa0d4]
	I0801 17:37:53.420777   31047 ssh_runner.go:195] Run: docker stop d5a3d4ccde35 795a7dfc5c0b 9c6c1ed81713 1d852044111d 2f0cbdfcc618 803f6a6ae70d 41e8b95b80bc b8daaea5d97c b53b375d313f be3fbf75c305 482dbbf122e4 5abcdb77ef04 302f547a73d8 5c08de9ffe04 daf4df3d9163 4dd96b3aa0d4
	I0801 17:37:53.452120   31047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:37:53.462361   31047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:37:53.469872   31047 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  2 00:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug  2 00:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug  2 00:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug  2 00:36 /etc/kubernetes/scheduler.conf
	
	I0801 17:37:53.469922   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 17:37:53.477025   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 17:37:53.483955   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 17:37:53.490967   31047 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.491012   31047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0801 17:37:53.497749   31047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 17:37:53.504618   31047 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:37:53.504666   31047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
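Annotation: the grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes must point at https://control-plane.minikube.internal:8443, and any file missing that endpoint is removed so kubeadm regenerates it. A local sketch of the same check, to be run as root on the node (file list and endpoint come from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to clean
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			// Same effect as the "sudo rm -f" above: stale endpoint, regenerate later.
			fmt.Printf("%s lacks %s - removing\n", f, endpoint)
			os.Remove(f)
		}
	}
}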
	I0801 17:37:53.511317   31047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:37:53.518669   31047 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:37:53.518679   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:53.563806   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:54.484230   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:54.652440   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:37:54.710862   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
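Annotation: the five Run lines above are the restart path's piecewise init: rather than a full kubeadm init, each phase (certs, kubeconfig, kubelet-start, control-plane, etcd) is invoked separately against the same kubeadm.yaml. A sketch driving those phases in order; the bash -c wrapper and PATH prefix mirror the commands in the log, but the loop itself is illustrative.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same phase order as the restart path above.
	for _, phase := range []string{
		"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
	} {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", phase, err, out)
		}
	}
}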
	I0801 17:37:54.763698   31047 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:37:54.763766   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:55.273497   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:55.775502   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:37:55.820137   31047 api_server.go:71] duration metric: took 1.056421863s to wait for apiserver process to appear ...
	I0801 17:37:55.820154   31047 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:37:55.820168   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:55.821591   31047 api_server.go:256] stopped: https://127.0.0.1:51289/healthz: Get "https://127.0.0.1:51289/healthz": EOF
	I0801 17:37:56.322368   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:59.001585   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0801 17:37:59.001601   31047 api_server.go:102] status: https://127.0.0.1:51289/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:37:59.323815   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:59.331879   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:37:59.331896   31047 api_server.go:102] status: https://127.0.0.1:51289/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:37:59.821943   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:37:59.827351   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:37:59.827368   31047 api_server.go:102] status: https://127.0.0.1:51289/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:38:00.324020   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:38:00.331405   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 200:
	ok
	I0801 17:38:00.337668   31047 api_server.go:140] control plane version: v1.24.3
	I0801 17:38:00.337681   31047 api_server.go:130] duration metric: took 4.517452084s to wait for apiserver health ...
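Annotation: the healthz sequence above is typical of a cold apiserver: first a connection EOF, then 403 for the anonymous user, then 500 while the rbac and priority-class post-start hooks finish, and finally 200 "ok". A sketch of polling /healthz the same way; the port is this run's forwarded port, and certificate verification is skipped because the serving cert chains to minikubeCA, which the probing host does not trust.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for deadline := time.Now().Add(time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get("https://127.0.0.1:51289/healthz")
		if err != nil {
			continue // EOF/refused while the apiserver is still coming up
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthz:", string(body)) // "ok"
			return
		}
		// 403 (anonymous user) and 500 (post-start hooks pending) are retryable.
		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	}
	fmt.Println("gave up waiting for healthz")
}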
	I0801 17:38:00.337687   31047 cni.go:95] Creating CNI manager for ""
	I0801 17:38:00.337692   31047 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:38:00.337703   31047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:38:00.344812   31047 system_pods.go:59] 8 kube-system pods found
	I0801 17:38:00.344828   31047 system_pods.go:61] "coredns-6d4b75cb6d-qb7sz" [77b59710-ca1b-4065-bf3b-ee7a85c78408] Running
	I0801 17:38:00.344836   31047 system_pods.go:61] "etcd-no-preload-20220801173626-13911" [e7d936e6-08ca-4c1d-99af-689effe61062] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0801 17:38:00.344843   31047 system_pods.go:61] "kube-apiserver-no-preload-20220801173626-13911" [4e6c4e55-cc13-472a-afbe-59a6a2ec20ad] Running
	I0801 17:38:00.344847   31047 system_pods.go:61] "kube-controller-manager-no-preload-20220801173626-13911" [28fbab73-82d5-4181-8471-d287ef555c41] Running
	I0801 17:38:00.344851   31047 system_pods.go:61] "kube-proxy-2spmx" [34f279f3-ae86-4a39-92bc-978b6b6c44fd] Running
	I0801 17:38:00.344855   31047 system_pods.go:61] "kube-scheduler-no-preload-20220801173626-13911" [8b3b67a0-1d6a-454c-85e1-c104c7bff40e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0801 17:38:00.344862   31047 system_pods.go:61] "metrics-server-5c6f97fb75-wrh2c" [9d42bee2-4bb9-4237-8444-831f4c65f0b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:38:00.344866   31047 system_pods.go:61] "storage-provisioner" [dd76b63a-5481-4315-bfbb-d56bd50aef64] Running
	I0801 17:38:00.344870   31047 system_pods.go:74] duration metric: took 7.163598ms to wait for pod list to return data ...
	I0801 17:38:00.344876   31047 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:38:00.347456   31047 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:38:00.347468   31047 node_conditions.go:123] node cpu capacity is 6
	I0801 17:38:00.347477   31047 node_conditions.go:105] duration metric: took 2.59659ms to run NodePressure ...
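Annotation: the node_conditions lines above read the node's ephemeral-storage and CPU capacity to verify there is no node pressure. The same figures can be pulled with kubectl; a sketch using a jsonpath query (the kubeconfig path is a placeholder):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Prints "<node> cpu=6 ephemeral=115334268Ki"-style lines, matching the log.
	out, err := exec.Command("kubectl",
		"--kubeconfig", "/var/lib/minikube/kubeconfig", // placeholder path
		"get", "nodes", "-o",
		`jsonpath={range .items[*]}{.metadata.name}{" cpu="}{.status.capacity.cpu}{" ephemeral="}{.status.capacity['ephemeral-storage']}{"\n"}{end}`,
	).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}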
	I0801 17:38:00.347486   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:38:00.471283   31047 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0801 17:38:00.475832   31047 kubeadm.go:777] kubelet initialised
	I0801 17:38:00.475844   31047 kubeadm.go:778] duration metric: took 4.548844ms waiting for restarted kubelet to initialise ...
	I0801 17:38:00.475851   31047 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:38:00.481039   31047 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-qb7sz" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:00.486739   31047 pod_ready.go:92] pod "coredns-6d4b75cb6d-qb7sz" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:00.486750   31047 pod_ready.go:81] duration metric: took 5.697955ms waiting for pod "coredns-6d4b75cb6d-qb7sz" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:00.486762   31047 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:02.500418   31047 pod_ready.go:102] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:05.000962   31047 pod_ready.go:102] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:07.001386   31047 pod_ready.go:102] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:08.499575   31047 pod_ready.go:92] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:08.499589   31047 pod_ready.go:81] duration metric: took 8.012693599s waiting for pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:08.499595   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:10.513107   31047 pod_ready.go:102] pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:12.510113   31047 pod_ready.go:92] pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:12.510126   31047 pod_ready.go:81] duration metric: took 4.010464323s waiting for pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:12.510132   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.022615   31047 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:14.022629   31047 pod_ready.go:81] duration metric: took 1.512455198s waiting for pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.022635   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2spmx" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.026883   31047 pod_ready.go:92] pod "kube-proxy-2spmx" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:14.026894   31047 pod_ready.go:81] duration metric: took 4.246546ms waiting for pod "kube-proxy-2spmx" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.026900   31047 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:14.030969   31047 pod_ready.go:92] pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:38:14.030977   31047 pod_ready.go:81] duration metric: took 4.07323ms waiting for pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
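Annotation: the pod_ready waits above poll each system-critical pod's Ready condition with a 4m cap per pod, using the label set listed at 17:38:00.475851. The same waits can be expressed with kubectl wait; a sketch looping over those selectors (the kubeconfig path is a placeholder):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// One selector per system-critical component checked in the log above.
	for _, selector := range []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	} {
		cmd := exec.Command("kubectl",
			"--kubeconfig", "/var/lib/minikube/kubeconfig", // placeholder path
			"-n", "kube-system", "wait", "pod",
			"--selector", selector,
			"--for=condition=Ready", "--timeout=4m0s")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", selector, err, out)
		}
	}
}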
	I0801 17:38:14.030983   31047 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace to be "Ready" ...
	I0801 17:38:16.041234   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:18.041647   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:20.542837   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:23.041487   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:25.043560   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:27.540915   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:29.543086   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:32.042479   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:34.544640   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:37.044506   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:39.541915   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:41.544271   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:44.041420   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:46.042431   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:48.044498   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:50.543837   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:53.041176   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:55.044380   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:57.541598   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:38:59.545044   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:02.042789   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:04.044739   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:06.541143   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:08.542691   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	W0801 17:39:10.045604   30307 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
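Annotation: when kubeadm times out like this, its own hint ('docker ps -a | grep kube | grep -v pause', then 'docker logs CONTAINERID') can be scripted. A sketch that lists non-pause kube containers and dumps the tail of each; the k8s_ name prefix and _POD_ pause-sandbox convention match the container names seen elsewhere in this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_", "--format", "{{.ID}} {{.Names}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == "" || strings.Contains(line, "_POD_") { // skip pause sandboxes
			continue
		}
		id := strings.Fields(line)[0]
		logs, _ := exec.Command("docker", "logs", "--tail", "20", id).CombinedOutput()
		fmt.Printf("--- %s ---\n%s\n", line, logs)
	}
}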
	
	I0801 17:39:10.045633   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0801 17:39:10.468055   30307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:39:10.477578   30307 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:39:10.477629   30307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:39:10.485644   30307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:39:10.485666   30307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:39:11.219133   30307 out.go:204]   - Generating certificates and keys ...
	I0801 17:39:11.823639   30307 out.go:204]   - Booting up control plane ...
	I0801 17:39:11.042720   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:13.042943   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:15.043258   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:17.043865   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:19.542284   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:21.544438   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:24.040750   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:26.042378   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:28.544524   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:31.041513   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:33.042620   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:35.043058   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:37.543424   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:40.043048   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:42.044820   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:44.541659   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:46.544518   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:49.044133   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:51.543084   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:54.045047   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:56.542567   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:39:58.545088   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:01.043406   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:03.044252   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:05.542151   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:07.543499   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:09.544345   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:12.045195   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:14.542899   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:16.544629   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:18.545930   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:21.044674   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:23.045379   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:25.545385   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:27.545491   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:30.042095   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:32.043488   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:34.548393   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:37.043300   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:39.546662   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:42.044663   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:44.544152   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:46.545544   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:49.042550   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:51.044633   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:53.542274   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:55.543494   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:40:58.043271   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:00.043457   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:02.043870   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:04.044318   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:06.739199   30307 kubeadm.go:397] StartCluster complete in 7m59.637942115s
	I0801 17:41:06.739275   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0801 17:41:06.768243   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.768256   30307 logs.go:276] No container was found matching "kube-apiserver"
	I0801 17:41:06.768314   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0801 17:41:06.798174   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.798186   30307 logs.go:276] No container was found matching "etcd"
	I0801 17:41:06.798242   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0801 17:41:06.827196   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.827207   30307 logs.go:276] No container was found matching "coredns"
	I0801 17:41:06.827266   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0801 17:41:06.857151   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.857164   30307 logs.go:276] No container was found matching "kube-scheduler"
	I0801 17:41:06.857221   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0801 17:41:06.886482   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.886494   30307 logs.go:276] No container was found matching "kube-proxy"
	I0801 17:41:06.886551   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0801 17:41:06.915571   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.915583   30307 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 17:41:06.915642   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0801 17:41:06.946187   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.946200   30307 logs.go:276] No container was found matching "storage-provisioner"
	I0801 17:41:06.946261   30307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0801 17:41:06.976305   30307 logs.go:274] 0 containers: []
	W0801 17:41:06.976317   30307 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 17:41:06.976324   30307 logs.go:123] Gathering logs for container status ...
	I0801 17:41:06.976330   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 17:41:09.033371   30307 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056995262s)
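Annotation: the container-status command above uses a shell fallback: prefer crictl when it is on PATH, otherwise fall back to docker ps. A sketch driving that exact one-liner from Go:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Identical fallback to the log: crictl when installed, docker otherwise.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}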
	I0801 17:41:09.033517   30307 logs.go:123] Gathering logs for kubelet ...
	I0801 17:41:09.033529   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0801 17:41:09.074454   30307 logs.go:123] Gathering logs for dmesg ...
	I0801 17:41:09.074467   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 17:41:09.086365   30307 logs.go:123] Gathering logs for describe nodes ...
	I0801 17:41:09.086383   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 17:41:09.139109   30307 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 17:41:09.139121   30307 logs.go:123] Gathering logs for Docker ...
	I0801 17:41:09.139129   30307 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0801 17:41:09.152961   30307 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0801 17:41:09.152979   30307 out.go:239] * 
	W0801 17:41:09.153075   30307 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 17:41:09.153105   30307 out.go:239] * 
	W0801 17:41:09.153626   30307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 17:41:09.216113   30307 out.go:177] 
	W0801 17:41:09.258477   30307 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 17:41:09.258605   30307 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0801 17:41:09.258689   30307 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0801 17:41:09.300266   30307 out.go:177] 
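	The run above fails exactly the way minikube's own guidance describes: the kubelet never answers on 127.0.0.1:10248, kubeadm times out in wait-control-plane, and the start exits with K8S_KUBELET_NOT_RUNNING. A follow-up sketch based only on the suggestions printed above; <profile> is a placeholder for the failing profile name, which this excerpt does not show:
	
		# Retry with the cgroup driver the Suggestion line recommends.
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	
		# Or open a shell on the node and follow the kubeadm hints directly.
		minikube ssh -p <profile>
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		docker ps -a | grep kube | grep -v pause
		docker logs CONTAINERID    # substitute the ID of the failing container
	
		# Collect everything for a bug report, as the boxed message asks.
		minikube logs --file=logs.txt
	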
	I0801 17:41:06.045067   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:08.046647   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:10.542246   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:12.544603   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:15.044082   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:17.045083   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:19.045469   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:21.546117   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:24.043650   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:26.044945   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:28.546922   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:31.044492   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:33.044542   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:35.546423   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:38.043601   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:40.044277   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:42.045851   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:44.548121   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:47.045194   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:49.045814   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:51.546325   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:53.546533   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:56.045181   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:41:58.546908   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:00.547797   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:03.046647   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:05.050012   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:07.547212   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:10.046877   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:12.547856   31047 pod_ready.go:102] pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace has status "Ready":"False"
	I0801 17:42:14.038717   31047 pod_ready.go:81] duration metric: took 4m0.003937501s waiting for pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace to be "Ready" ...
	E0801 17:42:14.038739   31047 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-wrh2c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0801 17:42:14.038757   31047 pod_ready.go:38] duration metric: took 4m13.558984148s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:42:14.038793   31047 kubeadm.go:630] restartCluster took 4m23.763066112s
	W0801 17:42:14.038925   31047 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
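	The four-minute loop above is minikube's extra pod_ready wait. The same condition can be checked by hand, assuming a kubeconfig already pointed at this cluster:
	
		# Show why the pod is not Ready (events and container statuses).
		kubectl -n kube-system describe pod metrics-server-5c6f97fb75-wrh2c
	
		# Reproduce the wait minikube performs, with the same 4m budget.
		kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-5c6f97fb75-wrh2c --timeout=4m0s
	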
	I0801 17:42:14.038954   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0801 17:42:16.390778   31047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.351772731s)
	I0801 17:42:16.390841   31047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:42:16.400528   31047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:42:16.408180   31047 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:42:16.408221   31047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:42:16.415600   31047 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
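	This is the stale-config check: unless all four kubeconfigs are present, minikube skips the cleanup, and after the kubeadm reset above they are all gone, so ls exits with status 2 and a fresh kubeadm init follows. The same check can be rerun by hand on the node:
	
		minikube ssh -p no-preload-20220801173626-13911
		sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	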
	I0801 17:42:16.415627   31047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:42:16.699655   31047 out.go:204]   - Generating certificates and keys ...
	I0801 17:42:17.913958   31047 out.go:204]   - Booting up control plane ...
	I0801 17:42:24.461033   31047 out.go:204]   - Configuring RBAC rules ...
	I0801 17:42:24.836074   31047 cni.go:95] Creating CNI manager for ""
	I0801 17:42:24.836086   31047 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:42:24.836103   31047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:42:24.836174   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:24.836186   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93 minikube.k8s.io/name=no-preload-20220801173626-13911 minikube.k8s.io/updated_at=2022_08_01T17_42_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:24.981309   31047 ops.go:34] apiserver oom_adj: -16
	I0801 17:42:24.981327   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:25.553761   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:26.053271   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:26.553560   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:27.053994   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:27.555176   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:28.054889   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:28.554418   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:29.053832   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:29.553756   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:30.053329   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:30.555289   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:31.053353   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:31.555295   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:32.053348   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:32.553293   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:33.053654   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:33.555103   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:34.054338   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:34.553949   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:35.053541   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:35.553481   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:36.053315   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:36.553930   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:37.054662   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:37.555373   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:38.054000   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:38.553367   31047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:42:38.617569   31047 kubeadm.go:1045] duration metric: took 13.781234788s to wait for elevateKubeSystemPrivileges.
	I0801 17:42:38.617584   31047 kubeadm.go:397] StartCluster complete in 4m48.37868331s
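	The burst of get sa calls above is the elevateKubeSystemPrivileges wait: minikube polls roughly every half second until the default ServiceAccount exists before declaring StartCluster complete. The two underlying commands, copied from the Run lines (minikube executes them on the node through its ssh runner):
	
		sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
		sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	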
	I0801 17:42:38.617608   31047 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:42:38.617699   31047 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:42:38.618272   31047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:42:39.133513   31047 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220801173626-13911" rescaled to 1
	I0801 17:42:39.133558   31047 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:42:39.133567   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:42:39.133607   31047 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0801 17:42:39.133809   31047 config.go:180] Loaded profile config "no-preload-20220801173626-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:42:39.194376   31047 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220801173626-13911"
	I0801 17:42:39.194376   31047 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220801173626-13911"
	I0801 17:42:39.194385   31047 addons.go:65] Setting dashboard=true in profile "no-preload-20220801173626-13911"
	I0801 17:42:39.194282   31047 out.go:177] * Verifying Kubernetes components...
	I0801 17:42:39.194399   31047 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220801173626-13911"
	I0801 17:42:39.194395   31047 addons.go:65] Setting metrics-server=true in profile "no-preload-20220801173626-13911"
	I0801 17:42:39.231626   31047 addons.go:153] Setting addon metrics-server=true in "no-preload-20220801173626-13911"
	W0801 17:42:39.194408   31047 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:42:39.231645   31047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0801 17:42:39.231649   31047 addons.go:162] addon metrics-server should already be in state true
	I0801 17:42:39.194411   31047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220801173626-13911"
	I0801 17:42:39.231723   31047 host.go:66] Checking if "no-preload-20220801173626-13911" exists ...
	I0801 17:42:39.231721   31047 host.go:66] Checking if "no-preload-20220801173626-13911" exists ...
	I0801 17:42:39.194424   31047 addons.go:153] Setting addon dashboard=true in "no-preload-20220801173626-13911"
	W0801 17:42:39.231797   31047 addons.go:162] addon dashboard should already be in state true
	I0801 17:42:39.195531   31047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0801 17:42:39.231863   31047 host.go:66] Checking if "no-preload-20220801173626-13911" exists ...
	I0801 17:42:39.232481   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:42:39.232531   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:42:39.232583   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:42:39.232790   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:42:39.255171   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:42:39.383513   31047 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0801 17:42:39.403742   31047 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0801 17:42:39.440375   31047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:42:39.410630   31047 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220801173626-13911"
	I0801 17:42:39.440383   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0801 17:42:39.458716   31047 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220801173626-13911" to be "Ready" ...
	I0801 17:42:39.477386   31047 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:42:39.514349   31047 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0801 17:42:39.514354   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0801 17:42:39.514369   31047 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:42:39.514430   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:42:39.535320   31047 host.go:66] Checking if "no-preload-20220801173626-13911" exists ...
	I0801 17:42:39.514432   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:42:39.517175   31047 node_ready.go:49] node "no-preload-20220801173626-13911" has status "Ready":"True"
	I0801 17:42:39.535385   31047 node_ready.go:38] duration metric: took 20.994571ms waiting for node "no-preload-20220801173626-13911" to be "Ready" ...
	I0801 17:42:39.556673   31047 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:42:39.538335   31047 cli_runner.go:164] Run: docker container inspect no-preload-20220801173626-13911 --format={{.State.Status}}
	I0801 17:42:39.556730   31047 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0801 17:42:39.567415   31047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-2nn4d" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:39.594427   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0801 17:42:39.594445   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0801 17:42:39.594521   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:42:39.622346   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:42:39.627626   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:42:39.645260   31047 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:42:39.645275   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:42:39.645335   31047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801173626-13911
	I0801 17:42:39.682223   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:42:39.729412   31047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:42:39.732366   31047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51290 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801173626-13911/id_rsa Username:docker}
	I0801 17:42:39.736209   31047 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0801 17:42:39.736225   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0801 17:42:39.809142   31047 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0801 17:42:39.809166   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0801 17:42:39.823414   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0801 17:42:39.823426   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0801 17:42:39.827731   31047 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:42:39.827742   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0801 17:42:39.841535   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0801 17:42:39.841548   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0801 17:42:39.848769   31047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:42:39.919527   31047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:42:39.919701   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0801 17:42:39.919714   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0801 17:42:39.935283   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0801 17:42:39.935295   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0801 17:42:39.954641   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0801 17:42:39.954657   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0801 17:42:40.030284   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0801 17:42:40.030308   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0801 17:42:40.117562   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0801 17:42:40.117580   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0801 17:42:40.138635   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0801 17:42:40.138655   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0801 17:42:40.225262   31047 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:42:40.225281   31047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0801 17:42:40.310238   31047 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.078352698s)
	I0801 17:42:40.310268   31047 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
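	The pipeline that just completed is how the host record gets into CoreDNS: minikube reads the coredns ConfigMap, uses sed to splice a hosts block (192.168.65.2 host.minikube.internal, with fallthrough) in front of the forward directive, then replaces the ConfigMap. One way to confirm the record landed, assuming kubectl points at this cluster:
	
		kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	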
	I0801 17:42:40.324674   31047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:42:40.509435   31047 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220801173626-13911"
	I0801 17:42:40.610125   31047 pod_ready.go:92] pod "coredns-6d4b75cb6d-2nn4d" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:40.610138   31047 pod_ready.go:81] duration metric: took 1.015766318s waiting for pod "coredns-6d4b75cb6d-2nn4d" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:40.610146   31047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-flh6s" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:41.180477   31047 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0801 17:42:41.253292   31047 addons.go:414] enableAddons completed in 2.119666964s
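	As the Run lines above show, enableAddons works by copying each manifest into /etc/kubernetes/addons on the node and applying it with the cluster's bundled kubectl. The same addons can also be toggled per profile from the host, for example:
	
		minikube addons list -p no-preload-20220801173626-13911
		minikube addons enable metrics-server -p no-preload-20220801173626-13911
		minikube addons enable dashboard -p no-preload-20220801173626-13911
	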
	I0801 17:42:42.621261   31047 pod_ready.go:92] pod "coredns-6d4b75cb6d-flh6s" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:42.621274   31047 pod_ready.go:81] duration metric: took 2.01109299s waiting for pod "coredns-6d4b75cb6d-flh6s" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.621280   31047 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.626750   31047 pod_ready.go:92] pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:42.626758   31047 pod_ready.go:81] duration metric: took 5.464362ms waiting for pod "etcd-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.626764   31047 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.631106   31047 pod_ready.go:92] pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:42.631114   31047 pod_ready.go:81] duration metric: took 4.345723ms waiting for pod "kube-apiserver-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.631119   31047 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.635093   31047 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:42.635100   31047 pod_ready.go:81] duration metric: took 3.976528ms waiting for pod "kube-controller-manager-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.635106   31047 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8gpjj" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.719971   31047 pod_ready.go:92] pod "kube-proxy-8gpjj" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:42.719986   31047 pod_ready.go:81] duration metric: took 84.874465ms waiting for pod "kube-proxy-8gpjj" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:42.719994   31047 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:43.119014   31047 pod_ready.go:92] pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:42:43.119025   31047 pod_ready.go:81] duration metric: took 399.010321ms waiting for pod "kube-scheduler-no-preload-20220801173626-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:42:43.119030   31047 pod_ready.go:38] duration metric: took 3.562189656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:42:43.119042   31047 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:42:43.119083   31047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:42:43.131288   31047 api_server.go:71] duration metric: took 3.997644423s to wait for apiserver process to appear ...
	I0801 17:42:43.131304   31047 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:42:43.131313   31047 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51289/healthz ...
	I0801 17:42:43.136512   31047 api_server.go:266] https://127.0.0.1:51289/healthz returned 200:
	ok
	I0801 17:42:43.137662   31047 api_server.go:140] control plane version: v1.24.3
	I0801 17:42:43.137671   31047 api_server.go:130] duration metric: took 6.363278ms to wait for apiserver health ...
	I0801 17:42:43.137676   31047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:42:43.320862   31047 system_pods.go:59] 9 kube-system pods found
	I0801 17:42:43.320876   31047 system_pods.go:61] "coredns-6d4b75cb6d-2nn4d" [d263f6f8-04c0-4226-8cd8-34ac2f30b95e] Running
	I0801 17:42:43.320880   31047 system_pods.go:61] "coredns-6d4b75cb6d-flh6s" [407a23dd-cab9-4929-a0e6-d71acc8c10d6] Running
	I0801 17:42:43.320883   31047 system_pods.go:61] "etcd-no-preload-20220801173626-13911" [aa8583d7-65f3-4c5b-adb8-42f87101c146] Running
	I0801 17:42:43.320886   31047 system_pods.go:61] "kube-apiserver-no-preload-20220801173626-13911" [28218c4f-a48c-4d52-9468-1b2e099be70e] Running
	I0801 17:42:43.320890   31047 system_pods.go:61] "kube-controller-manager-no-preload-20220801173626-13911" [38acf19f-e4cd-4018-88ec-8aeedb05a86c] Running
	I0801 17:42:43.320894   31047 system_pods.go:61] "kube-proxy-8gpjj" [24c63150-6434-42c5-abeb-967bd7e0a8b7] Running
	I0801 17:42:43.320898   31047 system_pods.go:61] "kube-scheduler-no-preload-20220801173626-13911" [1e9907bf-bfae-424d-90f1-2bbf4546559c] Running
	I0801 17:42:43.320904   31047 system_pods.go:61] "metrics-server-5c6f97fb75-72ccc" [cde81437-6354-4b6e-97b2-71da55220f7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:42:43.320909   31047 system_pods.go:61] "storage-provisioner" [49990de8-bf79-4a9d-99dd-91cddb6b9f68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:42:43.320913   31047 system_pods.go:74] duration metric: took 183.231196ms to wait for pod list to return data ...
	I0801 17:42:43.320918   31047 default_sa.go:34] waiting for default service account to be created ...
	I0801 17:42:43.518513   31047 default_sa.go:45] found service account: "default"
	I0801 17:42:43.518527   31047 default_sa.go:55] duration metric: took 197.601836ms for default service account to be created ...
	I0801 17:42:43.518534   31047 system_pods.go:116] waiting for k8s-apps to be running ...
	I0801 17:42:43.721993   31047 system_pods.go:86] 9 kube-system pods found
	I0801 17:42:43.722006   31047 system_pods.go:89] "coredns-6d4b75cb6d-2nn4d" [d263f6f8-04c0-4226-8cd8-34ac2f30b95e] Running
	I0801 17:42:43.722011   31047 system_pods.go:89] "coredns-6d4b75cb6d-flh6s" [407a23dd-cab9-4929-a0e6-d71acc8c10d6] Running
	I0801 17:42:43.722014   31047 system_pods.go:89] "etcd-no-preload-20220801173626-13911" [aa8583d7-65f3-4c5b-adb8-42f87101c146] Running
	I0801 17:42:43.722020   31047 system_pods.go:89] "kube-apiserver-no-preload-20220801173626-13911" [28218c4f-a48c-4d52-9468-1b2e099be70e] Running
	I0801 17:42:43.722024   31047 system_pods.go:89] "kube-controller-manager-no-preload-20220801173626-13911" [38acf19f-e4cd-4018-88ec-8aeedb05a86c] Running
	I0801 17:42:43.722041   31047 system_pods.go:89] "kube-proxy-8gpjj" [24c63150-6434-42c5-abeb-967bd7e0a8b7] Running
	I0801 17:42:43.722048   31047 system_pods.go:89] "kube-scheduler-no-preload-20220801173626-13911" [1e9907bf-bfae-424d-90f1-2bbf4546559c] Running
	I0801 17:42:43.722055   31047 system_pods.go:89] "metrics-server-5c6f97fb75-72ccc" [cde81437-6354-4b6e-97b2-71da55220f7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:42:43.722061   31047 system_pods.go:89] "storage-provisioner" [49990de8-bf79-4a9d-99dd-91cddb6b9f68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:42:43.722066   31047 system_pods.go:126] duration metric: took 203.525763ms to wait for k8s-apps to be running ...
	I0801 17:42:43.722071   31047 system_svc.go:44] waiting for kubelet service to be running ....
	I0801 17:42:43.722121   31047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:42:43.731987   31047 system_svc.go:56] duration metric: took 9.909058ms WaitForService to wait for kubelet.
	I0801 17:42:43.732002   31047 kubeadm.go:572] duration metric: took 4.598353134s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0801 17:42:43.732016   31047 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:42:43.918004   31047 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:42:43.918015   31047 node_conditions.go:123] node cpu capacity is 6
	I0801 17:42:43.918022   31047 node_conditions.go:105] duration metric: took 185.999582ms to run NodePressure ...
	I0801 17:42:43.918030   31047 start.go:216] waiting for startup goroutines ...
	I0801 17:42:43.947785   31047 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:42:43.971474   31047 out.go:177] * Done! kubectl is now configured to use "no-preload-20220801173626-13911" cluster and "default" namespace by default
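	The closing health checks go through the Docker-mapped apiserver port (51289 in this run, forwarded from 8443 in the container). Both probes can be repeated by hand; the apiserver serves /healthz to unauthenticated callers by default, and kubectl is configured per the Done! line:
	
		kubectl get --raw /healthz
		curl -sk https://127.0.0.1:51289/healthz    # the port is specific to this run
	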
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:37:46 UTC, end at Tue 2022-08-02 00:43:42 UTC. --
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.479119210Z" level=info msg="ignoring event" container=5f322afed57b554a56eb9629c7b882a66372b8488c785d608726cefb04b1cabf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.551283355Z" level=info msg="ignoring event" container=1565a682f8d571eb0fdb9122caedf952e3512a46a8f1868cb4c221144d1c4773 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.621890660Z" level=info msg="ignoring event" container=3cda2e01c5e7d18165e3697ad47d1fadf4dca4c9a0f56d763cdbe3357d185b20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.688767638Z" level=info msg="ignoring event" container=5b2a3d9c587aa6ffb8011312cc17ed4f982453b89fb97af0c2062a3259696005 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.757013939Z" level=info msg="ignoring event" container=97cf250f960bc11a02007bc871a7ea1423a7b2c88ae2edcddf79cc04f1580400 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.839932509Z" level=info msg="ignoring event" container=90b26d903a5670de69e207d3d6af0a47eb9deca4d893597a2736072d0d0597dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:15 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:15.912026380Z" level=info msg="ignoring event" container=20313b3bb3ecc6abb742caab6502c6265708db21ee17d5f7f5d71fea7c30b406 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:16 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:16.069021866Z" level=info msg="ignoring event" container=1ce596db9881edf6ae7cdf7353bb48e4f85765e5b83d21521a15074829e0bcd7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:41 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:41.615507009Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:42:41 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:41.615587493Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:42:41 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:41.616730287Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:42:42 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:42.497805519Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Aug 02 00:42:45 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:45.254486611Z" level=info msg="ignoring event" container=0e4761b6e3df9ac1035db0952bf76f264f704a88b1c2b4108d43421c35a51e1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:45 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:45.343377570Z" level=info msg="ignoring event" container=b6b3f79cecce51d4c63a43d069266fa357f67300c53c55d92d8b364c023cc565 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:49 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:49.020610168Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:42:49 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:49.317177820Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:42:52 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:52.755468097Z" level=info msg="ignoring event" container=23139eb9793cf3cd28da92b5350d701293390675f91d5dfcd263e29f855047c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:53 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:53.487012230Z" level=info msg="ignoring event" container=be43a37b01ad84b754474a326c889faa74aa5182811427897a0a88adcddda715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:42:53 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:53.855282432Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:42:53 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:53.855324514Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:42:53 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:42:53.856689403Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:43:39 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:43:39.658073397Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:43:39 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:43:39.658102264Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:43:39 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:43:39.659689844Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:43:39 no-preload-20220801173626-13911 dockerd[562]: time="2022-08-02T00:43:39.667456113Z" level=info msg="ignoring event" container=2d9d07f2fb8f064677321e9372ea4898a7ad3cae1821b4895000f3c51e39de84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
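	The recurring fake.domain lookup failures match the image reference selected earlier in this run (fake.domain/k8s.gcr.io/echoserver:1.4 for metrics-server), so these pulls appear intentional and are expected to keep failing; this would also explain why metrics-server-5c6f97fb75-72ccc stayed Pending in the pod lists above. The failure reproduces from inside the node:
	
		minikube ssh -p no-preload-20220801173626-13911
		docker pull fake.domain/k8s.gcr.io/echoserver:1.4    # fails: no such host on 192.168.65.2:53
	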
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	2d9d07f2fb8f0       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   32de5be36ea76
	02205af17d5ad       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   54 seconds ago       Running             kubernetes-dashboard        0                   7282bf38a77f6
	920e63be68c91       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   e575763a688ef
	e7d06fe14e5ed       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   eb2c1473ce9f8
	aae668aee0c10       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   368789abfa3d0
	57cd3a4eda12e       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   64a5b6d6e9f16
	e37f8ad07936c       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   425c2334c140a
	a16779ed9225f       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   c358be1684e8c
	45c9b744a52c7       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   aa22035d7e8a4
	
	* 
	* ==> coredns [e7d06fe14e5e] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220801173626-13911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220801173626-13911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93
	                    minikube.k8s.io/name=no-preload-20220801173626-13911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_08_01T17_42_24_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Aug 2022 00:42:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220801173626-13911
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Aug 2022 00:43:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Aug 2022 00:43:35 +0000   Tue, 02 Aug 2022 00:42:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Aug 2022 00:43:35 +0000   Tue, 02 Aug 2022 00:42:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Aug 2022 00:43:35 +0000   Tue, 02 Aug 2022 00:42:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Aug 2022 00:43:35 +0000   Tue, 02 Aug 2022 00:43:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220801173626-13911
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c192b04687c403f8fbb9bc7975b21b3
	  System UUID:                c6e3aa28-a480-4c1e-a554-33bdfd25fbc9
	  Boot ID:                    71cf565c-fd32-45eb-95e1-c87a7a5ba5a0
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-flh6s                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-no-preload-20220801173626-13911                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kube-apiserver-no-preload-20220801173626-13911             250m (4%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-no-preload-20220801173626-13911    200m (3%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-8gpjj                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-no-preload-20220801173626-13911             100m (1%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 metrics-server-5c6f97fb75-72ccc                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         62s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-mqzm9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-j8vz8                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 64s   kube-proxy       
	  Normal  Starting                 78s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  78s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s   kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s   kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s   kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasSufficientPID
	  Normal  NodeReady                78s   kubelet          Node no-preload-20220801173626-13911 status is now: NodeReady
	  Normal  RegisteredNode           65s   node-controller  Node no-preload-20220801173626-13911 event: Registered Node no-preload-20220801173626-13911 in Controller
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7s    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node no-preload-20220801173626-13911 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s    kubelet          Node no-preload-20220801173626-13911 status is now: NodeNotReady
	  Normal  NodeReady                7s    kubelet          Node no-preload-20220801173626-13911 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [57cd3a4eda12] <==
	* {"level":"info","ts":"2022-08-02T00:42:19.358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-08-02T00:42:19.358Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-08-02T00:42:19.360Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-08-02T00:42:19.362Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:42:19.362Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:42:19.362Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-08-02T00:42:19.362Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:42:19.656Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:42:19.661Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:42:19.660Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220801173626-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:42:19.661Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-08-02T00:42:19.662Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:42:19.662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:43:43 up  1:08,  0 users,  load average: 0.50, 0.58, 0.86
	Linux no-preload-20220801173626-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [a16779ed9225] <==
	* I0802 00:42:23.955220       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 00:42:24.664021       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0802 00:42:24.669526       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0802 00:42:24.676743       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0802 00:42:24.767664       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 00:42:38.017998       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0802 00:42:38.115879       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0802 00:42:38.727186       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0802 00:42:40.444991       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.106.110.242]
	I0802 00:42:41.119159       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.97.206.177]
	I0802 00:42:41.132584       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.97.165.0]
	W0802 00:42:41.328752       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:42:41.328828       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:42:41.328836       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:42:41.328883       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:42:41.328921       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:42:41.330429       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:43:41.287159       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:43:41.287223       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:43:41.287231       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:43:41.289556       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:43:41.289638       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:43:41.289664       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [45c9b744a52c] <==
	* I0802 00:42:38.373808       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-flh6s"
	I0802 00:42:38.635752       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0802 00:42:38.638585       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-2nn4d"
	I0802 00:42:40.318074       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0802 00:42:40.322506       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0802 00:42:40.333306       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0802 00:42:40.341220       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-72ccc"
	I0802 00:42:40.968128       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0802 00:42:40.973909       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:42:40.975446       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0802 00:42:40.977539       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:42:40.977753       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:42:40.980671       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:42:40.980734       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:42:40.982417       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0802 00:42:41.012520       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:42:41.012613       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:42:41.014333       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:42:41.014436       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:42:41.018478       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:42:41.018510       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:42:41.029020       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-j8vz8"
	I0802 00:42:41.038689       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-mqzm9"
	E0802 00:43:35.316822       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0802 00:43:35.389115       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [aae668aee0c1] <==
	* I0802 00:42:38.684222       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:42:38.684397       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:42:38.684420       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:42:38.722833       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:42:38.723091       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:42:38.723121       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:42:38.723134       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:42:38.723265       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:42:38.724548       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:42:38.724687       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:42:38.724693       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:42:38.725251       1 config.go:444] "Starting node config controller"
	I0802 00:42:38.725260       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:42:38.725279       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:42:38.725283       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:42:38.725297       1 config.go:317] "Starting service config controller"
	I0802 00:42:38.725302       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:42:38.825877       1 shared_informer.go:262] Caches are synced for service config
	I0802 00:42:38.825942       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0802 00:42:38.825994       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e37f8ad07936] <==
	* W0802 00:42:21.859061       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 00:42:21.859069       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 00:42:21.859125       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 00:42:21.859190       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 00:42:21.859620       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 00:42:21.859683       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 00:42:21.859989       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0802 00:42:21.860011       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0802 00:42:21.860192       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 00:42:21.860226       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 00:42:21.860183       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 00:42:21.860309       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 00:42:22.730936       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 00:42:22.730984       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0802 00:42:22.764306       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 00:42:22.764372       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 00:42:22.795350       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 00:42:22.795443       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 00:42:22.813104       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 00:42:22.813231       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0802 00:42:22.919962       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 00:42:22.920050       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 00:42:22.935072       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 00:42:22.935179       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0802 00:42:23.454947       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:37:46 UTC, end at Tue 2022-08-02 00:43:43 UTC. --
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863782    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cde81437-6354-4b6e-97b2-71da55220f7d-tmp-dir\") pod \"metrics-server-5c6f97fb75-72ccc\" (UID: \"cde81437-6354-4b6e-97b2-71da55220f7d\") " pod="kube-system/metrics-server-5c6f97fb75-72ccc"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863804    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/24c63150-6434-42c5-abeb-967bd7e0a8b7-kube-proxy\") pod \"kube-proxy-8gpjj\" (UID: \"24c63150-6434-42c5-abeb-967bd7e0a8b7\") " pod="kube-system/kube-proxy-8gpjj"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863880    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24c63150-6434-42c5-abeb-967bd7e0a8b7-lib-modules\") pod \"kube-proxy-8gpjj\" (UID: \"24c63150-6434-42c5-abeb-967bd7e0a8b7\") " pod="kube-system/kube-proxy-8gpjj"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863922    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gpn4\" (UniqueName: \"kubernetes.io/projected/49990de8-bf79-4a9d-99dd-91cddb6b9f68-kube-api-access-6gpn4\") pod \"storage-provisioner\" (UID: \"49990de8-bf79-4a9d-99dd-91cddb6b9f68\") " pod="kube-system/storage-provisioner"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.863938    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5d36ef6e-3081-4a75-a775-d906fc182113-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-j8vz8\" (UID: \"5d36ef6e-3081-4a75-a775-d906fc182113\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-j8vz8"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864008    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t77q\" (UniqueName: \"kubernetes.io/projected/5d36ef6e-3081-4a75-a775-d906fc182113-kube-api-access-9t77q\") pod \"kubernetes-dashboard-5fd5574d9f-j8vz8\" (UID: \"5d36ef6e-3081-4a75-a775-d906fc182113\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-j8vz8"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864050    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/49990de8-bf79-4a9d-99dd-91cddb6b9f68-tmp\") pod \"storage-provisioner\" (UID: \"49990de8-bf79-4a9d-99dd-91cddb6b9f68\") " pod="kube-system/storage-provisioner"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864065    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24c63150-6434-42c5-abeb-967bd7e0a8b7-xtables-lock\") pod \"kube-proxy-8gpjj\" (UID: \"24c63150-6434-42c5-abeb-967bd7e0a8b7\") " pod="kube-system/kube-proxy-8gpjj"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864142    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/407a23dd-cab9-4929-a0e6-d71acc8c10d6-config-volume\") pod \"coredns-6d4b75cb6d-flh6s\" (UID: \"407a23dd-cab9-4929-a0e6-d71acc8c10d6\") " pod="kube-system/coredns-6d4b75cb6d-flh6s"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864212    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fe43012f-2cca-414d-a62c-2a7a59aa5517-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-mqzm9\" (UID: \"fe43012f-2cca-414d-a62c-2a7a59aa5517\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-mqzm9"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864345    9861 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnphr\" (UniqueName: \"kubernetes.io/projected/407a23dd-cab9-4929-a0e6-d71acc8c10d6-kube-api-access-nnphr\") pod \"coredns-6d4b75cb6d-flh6s\" (UID: \"407a23dd-cab9-4929-a0e6-d71acc8c10d6\") " pod="kube-system/coredns-6d4b75cb6d-flh6s"
	Aug 02 00:43:36 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:36.864393    9861 reconciler.go:157] "Reconciler: start to sync state"
	Aug 02 00:43:38 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:38.015058    9861 request.go:601] Waited for 1.08021755s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 02 00:43:38 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:38.022770    9861 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220801173626-13911\" already exists" pod="kube-system/kube-scheduler-no-preload-20220801173626-13911"
	Aug 02 00:43:38 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:38.270757    9861 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220801173626-13911\" already exists" pod="kube-system/kube-apiserver-no-preload-20220801173626-13911"
	Aug 02 00:43:38 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:38.466659    9861 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220801173626-13911\" already exists" pod="kube-system/etcd-no-preload-20220801173626-13911"
	Aug 02 00:43:38 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:38.627346    9861 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220801173626-13911\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220801173626-13911"
	Aug 02 00:43:39 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:39.219354    9861 scope.go:110] "RemoveContainer" containerID="be43a37b01ad84b754474a326c889faa74aa5182811427897a0a88adcddda715"
	Aug 02 00:43:39 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:39.660116    9861 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 02 00:43:39 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:39.660174    9861 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 02 00:43:39 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:39.660331    9861 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vlcnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-72ccc_kube-system(cde81437-6354-4b6e-97b2-71da55220f7d): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Aug 02 00:43:39 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:39.660380    9861 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-72ccc" podUID=cde81437-6354-4b6e-97b2-71da55220f7d
	Aug 02 00:43:39 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:39.957114    9861 scope.go:110] "RemoveContainer" containerID="be43a37b01ad84b754474a326c889faa74aa5182811427897a0a88adcddda715"
	Aug 02 00:43:39 no-preload-20220801173626-13911 kubelet[9861]: I0802 00:43:39.957467    9861 scope.go:110] "RemoveContainer" containerID="2d9d07f2fb8f064677321e9372ea4898a7ad3cae1821b4895000f3c51e39de84"
	Aug 02 00:43:39 no-preload-20220801173626-13911 kubelet[9861]: E0802 00:43:39.957674    9861 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-mqzm9_kubernetes-dashboard(fe43012f-2cca-414d-a62c-2a7a59aa5517)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-mqzm9" podUID=fe43012f-2cca-414d-a62c-2a7a59aa5517
	
	* 
	* ==> kubernetes-dashboard [02205af17d5a] <==
	* 2022/08/02 00:42:48 Using namespace: kubernetes-dashboard
	2022/08/02 00:42:48 Using in-cluster config to connect to apiserver
	2022/08/02 00:42:48 Using secret token for csrf signing
	2022/08/02 00:42:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/08/02 00:42:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/08/02 00:42:48 Successful initial request to the apiserver, version: v1.24.3
	2022/08/02 00:42:48 Generating JWE encryption key
	2022/08/02 00:42:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/08/02 00:42:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/08/02 00:42:48 Initializing JWE encryption key from synchronized object
	2022/08/02 00:42:48 Creating in-cluster Sidecar client
	2022/08/02 00:42:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/08/02 00:42:48 Serving insecurely on HTTP port: 9090
	2022/08/02 00:43:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/08/02 00:42:48 Starting overwatch
	
	* 
	* ==> storage-provisioner [920e63be68c9] <==
	* I0802 00:42:41.365655       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 00:42:41.374723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 00:42:41.374759       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 00:42:41.381384       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 00:42:41.381498       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220801173626-13911_17d9f30b-bcd3-4a5c-9ad6-b7c1cd62a7bb!
	I0802 00:42:41.383056       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d00c2d0b-a52e-46a1-b0d9-f30e26be90f3", APIVersion:"v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220801173626-13911_17d9f30b-bcd3-4a5c-9ad6-b7c1cd62a7bb became leader
	I0802 00:42:41.482193       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220801173626-13911_17d9f30b-bcd3-4a5c-9ad6-b7c1cd62a7bb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220801173626-13911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-72ccc
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220801173626-13911 describe pod metrics-server-5c6f97fb75-72ccc
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220801173626-13911 describe pod metrics-server-5c6f97fb75-72ccc: exit status 1 (304.37911ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-72ccc" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220801173626-13911 describe pod metrics-server-5c6f97fb75-72ccc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (43.66s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (43.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220801174348-13911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911: exit status 2 (16.102837105s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911: exit status 2 (16.108400567s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220801174348-13911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220801174348-13911
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220801174348-13911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63",
	        "Created": "2022-08-02T00:43:55.412589795Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:45:08.503134427Z",
	            "FinishedAt": "2022-08-02T00:45:06.545740921Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63/hosts",
	        "LogPath": "/var/lib/docker/containers/e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63/e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63-json.log",
	        "Name": "/default-k8s-different-port-20220801174348-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220801174348-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220801174348-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62b4ab3a37e12ec7b8a73bd4e3f08a6635b56897f6a52e50a66b843e800c9075-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62b4ab3a37e12ec7b8a73bd4e3f08a6635b56897f6a52e50a66b843e800c9075/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62b4ab3a37e12ec7b8a73bd4e3f08a6635b56897f6a52e50a66b843e800c9075/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62b4ab3a37e12ec7b8a73bd4e3f08a6635b56897f6a52e50a66b843e800c9075/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220801174348-13911",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220801174348-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220801174348-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220801174348-13911",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220801174348-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "640df6005f39f95043d429d59991a0ec13ed3cb6a8bc9511e25e9fbe49a63647",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52050"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52051"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52048"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52049"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/640df6005f39",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220801174348-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e9ca8d08aada",
	                        "default-k8s-different-port-20220801174348-13911"
	                    ],
	                    "NetworkID": "93e4fac921bdf274c24ca84fb85972d1783a1db2a54eb681a049895a93516443",
	                    "EndpointID": "e398daf01fb716d2ababbbc27826f8eaa35ea274bfbc2ca263f407b5e32abe87",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
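Note (editorial aside, not part of the captured run): the inspect dump above already shows the stop/restart lifecycle — .State.FinishedAt (00:45:06) predates .State.StartedAt (00:45:08) because the container was stopped and then started again, matching the "Restarting existing docker container" step later in the log. When replaying this post-mortem, the same fields can be pulled without the full JSON using docker's standard Go-template output; a minimal sketch (the profile name is taken from this run, the flags are stock docker CLI):

    docker container inspect default-k8s-different-port-20220801174348-13911 \
      --format 'status={{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}'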
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220801174348-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220801174348-13911 logs -n 25: (2.877762513s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220801172716-13911            | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220801173625-13911      | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | disable-driver-mounts-20220801173625-13911        |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:42 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:51 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:45:07
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:45:07.234304   31913 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:45:07.234495   31913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:45:07.234500   31913 out.go:309] Setting ErrFile to fd 2...
	I0801 17:45:07.234506   31913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:45:07.234609   31913 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:45:07.235111   31913 out.go:303] Setting JSON to false
	I0801 17:45:07.250217   31913 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9878,"bootTime":1659391229,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:45:07.250344   31913 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:45:07.272605   31913 out.go:177] * [default-k8s-different-port-20220801174348-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:45:07.294231   31913 notify.go:193] Checking for updates...
	I0801 17:45:07.316180   31913 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:45:07.337992   31913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:45:07.359246   31913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:45:07.380136   31913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:45:07.401417   31913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:45:07.423835   31913 config.go:180] Loaded profile config "default-k8s-different-port-20220801174348-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:45:07.424579   31913 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:45:07.493796   31913 docker.go:137] docker version: linux-20.10.17
	I0801 17:45:07.493922   31913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:45:07.627933   31913 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:45:07.572823528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:45:07.671605   31913 out.go:177] * Using the docker driver based on existing profile
	I0801 17:45:07.693541   31913 start.go:284] selected driver: docker
	I0801 17:45:07.693586   31913 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220801174348-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:45:07.693713   31913 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:45:07.697078   31913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:45:07.829506   31913 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:45:07.755017741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:45:07.829656   31913 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 17:45:07.829672   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:45:07.829681   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:45:07.829693   31913 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220801174348-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:45:07.873313   31913 out.go:177] * Starting control plane node default-k8s-different-port-20220801174348-13911 in cluster default-k8s-different-port-20220801174348-13911
	I0801 17:45:07.894272   31913 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:45:07.916207   31913 out.go:177] * Pulling base image ...
	I0801 17:45:07.958286   31913 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:45:07.958311   31913 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:45:07.958367   31913 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0801 17:45:07.958398   31913 cache.go:57] Caching tarball of preloaded images
	I0801 17:45:07.958586   31913 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:45:07.958621   31913 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0801 17:45:07.959565   31913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/config.json ...
	I0801 17:45:08.023522   31913 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:45:08.023554   31913 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:45:08.023592   31913 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:45:08.023643   31913 start.go:371] acquiring machines lock for default-k8s-different-port-20220801174348-13911: {Name:mkf36bcbf3258128efc6b862fc1634fd58cb6b31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:45:08.023718   31913 start.go:375] acquired machines lock for "default-k8s-different-port-20220801174348-13911" in 52.949µs
	I0801 17:45:08.023737   31913 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:45:08.023747   31913 fix.go:55] fixHost starting: 
	I0801 17:45:08.023973   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:45:08.091536   31913 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220801174348-13911: state=Stopped err=<nil>
	W0801 17:45:08.091569   31913 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:45:08.135438   31913 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220801174348-13911" ...
	I0801 17:45:08.157366   31913 cli_runner.go:164] Run: docker start default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.512780   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:45:08.585392   31913 kic.go:415] container "default-k8s-different-port-20220801174348-13911" state is running.
	I0801 17:45:08.586032   31913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.658898   31913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/config.json ...
	I0801 17:45:08.659358   31913 machine.go:88] provisioning docker machine ...
	I0801 17:45:08.659384   31913 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220801174348-13911"
	I0801 17:45:08.659447   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.732726   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:08.732938   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:08.732958   31913 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220801174348-13911 && echo "default-k8s-different-port-20220801174348-13911" | sudo tee /etc/hostname
	I0801 17:45:08.854995   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220801174348-13911
	
	I0801 17:45:08.855093   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.929782   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:08.929919   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:08.929937   31913 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220801174348-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220801174348-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220801174348-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:45:09.043526   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:45:09.043548   31913 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:45:09.043582   31913 ubuntu.go:177] setting up certificates
	I0801 17:45:09.043592   31913 provision.go:83] configureAuth start
	I0801 17:45:09.043662   31913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.122482   31913 provision.go:138] copyHostCerts
	I0801 17:45:09.122564   31913 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:45:09.122573   31913 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:45:09.122680   31913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:45:09.122870   31913 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:45:09.122879   31913 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:45:09.122942   31913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:45:09.123074   31913 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:45:09.123082   31913 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:45:09.123138   31913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:45:09.123253   31913 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220801174348-13911 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220801174348-13911]
	I0801 17:45:09.314883   31913 provision.go:172] copyRemoteCerts
	I0801 17:45:09.314960   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:45:09.315013   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.387026   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:09.473014   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:45:09.489683   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0801 17:45:09.506040   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 17:45:09.522131   31913 provision.go:86] duration metric: configureAuth took 478.520974ms
	I0801 17:45:09.522145   31913 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:45:09.522300   31913 config.go:180] Loaded profile config "default-k8s-different-port-20220801174348-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:45:09.522359   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.593649   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:09.593822   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:09.593832   31913 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:45:09.706218   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:45:09.706232   31913 ubuntu.go:71] root file system type: overlay
	I0801 17:45:09.706373   31913 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:45:09.706456   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.777069   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:09.777323   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:09.777370   31913 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:45:09.897154   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:45:09.897240   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.968155   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:09.968332   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:09.968348   31913 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:45:10.085677   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: 
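	The command above is a compare-then-swap guard: diff -u exits non-zero when the two unit files differ (or the installed one is missing), so the "|| { ... }" group installs docker.service.new and reloads/restarts Docker only when the rendered unit actually changed. The same idiom spelled out (variable names illustrative):

	    NEW=/lib/systemd/system/docker.service.new
	    CUR=/lib/systemd/system/docker.service
	    sudo diff -u "$CUR" "$NEW" || {
	      sudo mv "$NEW" "$CUR"
	      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	    }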
	I0801 17:45:10.085692   31913 machine.go:91] provisioned docker machine in 1.42630245s
	I0801 17:45:10.085699   31913 start.go:307] post-start starting for "default-k8s-different-port-20220801174348-13911" (driver="docker")
	I0801 17:45:10.085706   31913 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:45:10.085791   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:45:10.085838   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.157469   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.242875   31913 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:45:10.246401   31913 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:45:10.246414   31913 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:45:10.246421   31913 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:45:10.246425   31913 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:45:10.246432   31913 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:45:10.246536   31913 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:45:10.246673   31913 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:45:10.246820   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:45:10.253819   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:45:10.270599   31913 start.go:310] post-start completed in 184.88853ms
	I0801 17:45:10.270679   31913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:45:10.270725   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.341693   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.424621   31913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:45:10.428724   31913 fix.go:57] fixHost completed within 2.40494101s
	I0801 17:45:10.428734   31913 start.go:82] releasing machines lock for "default-k8s-different-port-20220801174348-13911", held for 2.404972203s
	I0801 17:45:10.428805   31913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.499445   31913 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:45:10.499453   31913 ssh_runner.go:195] Run: systemctl --version
	I0801 17:45:10.499510   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.499521   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.577297   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.580177   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.863075   31913 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:45:10.872943   31913 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:45:10.873004   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:45:10.884327   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:45:10.896972   31913 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:45:10.964365   31913 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:45:11.037267   31913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:45:11.105843   31913 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:45:11.334865   31913 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0801 17:45:11.408996   31913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:45:11.478637   31913 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0801 17:45:11.489256   31913 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0801 17:45:11.489322   31913 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0801 17:45:11.493283   31913 start.go:471] Will wait 60s for crictl version
	I0801 17:45:11.493327   31913 ssh_runner.go:195] Run: sudo crictl version
	I0801 17:45:11.594433   31913 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0801 17:45:11.594501   31913 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:45:11.628725   31913 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:45:11.707948   31913 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0801 17:45:11.708167   31913 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220801174348-13911 dig +short host.docker.internal
	I0801 17:45:11.835685   31913 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:45:11.835785   31913 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:45:11.839982   31913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
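	The /etc/hosts rewrite above is an idempotent add-or-replace: grep -v drops any existing line for the name (keyed on the literal tab before it), echo appends the fresh mapping, the result is staged in /tmp/h.$$ ($$ expands to the shell's PID, giving a unique temp file), and sudo cp installs it, since the redirection itself runs unprivileged. Generalized sketch using this run's values:

	    IP=192.168.65.2 NAME=host.minikube.internal
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts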
	I0801 17:45:11.849128   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:11.920457   31913 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:45:11.920518   31913 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:45:11.950489   31913 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0801 17:45:11.950505   31913 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:45:11.950592   31913 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:45:11.979888   31913 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0801 17:45:11.979908   31913 cache_images.go:84] Images are preloaded, skipping loading
	I0801 17:45:11.979982   31913 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:45:12.056792   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:45:12.056805   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:45:12.056818   31913 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:45:12.056833   31913 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220801174348-13911 NodeName:default-k8s-different-port-20220801174348-13911 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:45:12.056925   31913 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220801174348-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
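	The config above is a single multi-document YAML stream: four documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written to /var/tmp/minikube/kubeadm.yaml and consumed by the "kubeadm init phase ... --config" invocations later in this log. A quick on-node sanity check of the document kinds (sketch):

	    sudo grep -E '^kind:' /var/tmp/minikube/kubeadm.yaml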
	
	I0801 17:45:12.057065   31913 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220801174348-13911 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0801 17:45:12.057131   31913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 17:45:12.066061   31913 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:45:12.066148   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:45:12.073618   31913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0801 17:45:12.087045   31913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:45:12.099457   31913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0801 17:45:12.112836   31913 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:45:12.116178   31913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:45:12.125809   31913 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911 for IP: 192.168.67.2
	I0801 17:45:12.125918   31913 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:45:12.125966   31913 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:45:12.126040   31913 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.key
	I0801 17:45:12.126618   31913 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/apiserver.key.c7fa3a9e
	I0801 17:45:12.126780   31913 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/proxy-client.key
	I0801 17:45:12.127193   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:45:12.127456   31913 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:45:12.127470   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:45:12.127507   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:45:12.127537   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:45:12.127568   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:45:12.127653   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:45:12.128137   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:45:12.145970   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 17:45:12.162916   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:45:12.179232   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0801 17:45:12.195637   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:45:12.211954   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:45:12.228458   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:45:12.245049   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:45:12.273133   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:45:12.289297   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:45:12.305906   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:45:12.322249   31913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:45:12.334078   31913 ssh_runner.go:195] Run: openssl version
	I0801 17:45:12.338984   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:45:12.346569   31913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:45:12.350437   31913 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:45:12.350479   31913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:45:12.355640   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:45:12.362358   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:45:12.369524   31913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:45:12.373094   31913 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:45:12.373143   31913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:45:12.378297   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
	I0801 17:45:12.385323   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:45:12.392716   31913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:45:12.396215   31913 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:45:12.396253   31913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:45:12.401492   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
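	The hash-and-link sequence above, repeated for each CA file, is how OpenSSL-style trust directories are populated: "openssl x509 -hash -noout" prints the certificate's subject-name hash, and symlinking the PEM as /etc/ssl/certs/<hash>.0 lets TLS clients find the CA by hash at verification time. The same steps as a loop (glob illustrative):

	    for pem in /usr/share/ca-certificates/*.pem; do
	      h=$(openssl x509 -hash -noout -in "$pem")
	      sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
	    done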
	I0801 17:45:12.408598   31913 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220801174348-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:45:12.408692   31913 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:45:12.437555   31913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:45:12.445130   31913 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:45:12.445143   31913 kubeadm.go:626] restartCluster start
	I0801 17:45:12.445184   31913 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:45:12.451625   31913 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:12.451684   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:12.522544   31913 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220801174348-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:45:12.522712   31913 kubeconfig.go:127] "default-k8s-different-port-20220801174348-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:45:12.523108   31913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:45:12.524240   31913 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:45:12.531709   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:12.531764   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:12.539797   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:12.740348   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:12.740540   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:12.750680   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:12.941944   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:12.942091   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:12.952401   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.141761   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.141933   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.152103   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.341140   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.341291   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.351393   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.541127   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.541267   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.550653   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.741989   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.742177   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.752445   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.939964   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.940062   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.949928   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.141998   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.142136   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.152691   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.340125   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.340267   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.350279   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.541428   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.541614   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.551563   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.741132   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.741260   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.751806   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.942014   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.942215   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.952554   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.140909   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.141047   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.151515   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.339961   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.340060   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.349894   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.539967   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.540029   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.548707   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.548716   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.548755   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.556495   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.556506   31913 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0801 17:45:15.556515   31913 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:45:15.556573   31913 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:45:15.588109   31913 docker.go:443] Stopping containers: [5330cf5dab78 804bfd7a4dd6 b753d3511dd1 ec2aabab3838 a079991f7e29 56f67accc23d f4047c9cc1b3 ae0ff377c871 d505ae905c0f 76bf3aba28e0 f366c63a7d21 8f26f8c13f7f 0da89e56674b f94f6bde6263 64851a902487 66e806932a2b]
	I0801 17:45:15.588183   31913 ssh_runner.go:195] Run: docker stop 5330cf5dab78 804bfd7a4dd6 b753d3511dd1 ec2aabab3838 a079991f7e29 56f67accc23d f4047c9cc1b3 ae0ff377c871 d505ae905c0f 76bf3aba28e0 f366c63a7d21 8f26f8c13f7f 0da89e56674b f94f6bde6263 64851a902487 66e806932a2b
	I0801 17:45:15.617424   31913 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:45:15.627354   31913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:45:15.634554   31913 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  2 00:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug  2 00:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Aug  2 00:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug  2 00:44 /etc/kubernetes/scheduler.conf
	
	I0801 17:45:15.634603   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0801 17:45:15.641371   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0801 17:45:15.648041   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0801 17:45:15.654727   31913 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.654766   31913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0801 17:45:15.661325   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0801 17:45:15.668099   31913 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.668152   31913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0801 17:45:15.674654   31913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:45:15.681717   31913 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:45:15.681728   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:15.726589   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.500734   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.684111   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.732262   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.805126   31913 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:45:16.805184   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:17.316069   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:17.815974   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:18.316161   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:18.326717   31913 api_server.go:71] duration metric: took 1.521564045s to wait for apiserver process to appear ...
	I0801 17:45:18.326733   31913 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:45:18.326742   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:21.129223   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0801 17:45:21.129239   31913 api_server.go:102] status: https://127.0.0.1:52049/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:45:21.631396   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:21.638935   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:45:21.638953   31913 api_server.go:102] status: https://127.0.0.1:52049/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:45:22.130245   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:22.135894   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:45:22.135911   31913 api_server.go:102] status: https://127.0.0.1:52049/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:45:22.629735   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:22.636164   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 200:
	ok
	I0801 17:45:22.643785   31913 api_server.go:140] control plane version: v1.24.3
	I0801 17:45:22.643800   31913 api_server.go:130] duration metric: took 4.31699607s to wait for apiserver health ...
	I0801 17:45:22.643806   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:45:22.643812   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:45:22.643822   31913 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:45:22.651516   31913 system_pods.go:59] 8 kube-system pods found
	I0801 17:45:22.651532   31913 system_pods.go:61] "coredns-6d4b75cb6d-5s86p" [e4978024-d992-4fd7-bec6-1d4cb093c4c8] Running
	I0801 17:45:22.651536   31913 system_pods.go:61] "etcd-default-k8s-different-port-20220801174348-13911" [c440b48e-48d8-4933-870b-c73df0860f90] Running
	I0801 17:45:22.651540   31913 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220801174348-13911" [e4032a9b-61fb-4493-b20a-e5d8f00382a1] Running
	I0801 17:45:22.651544   31913 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220801174348-13911" [39dbe98f-51c3-43d0-bca0-2ca31da431b5] Running
	I0801 17:45:22.651554   31913 system_pods.go:61] "kube-proxy-f7zxq" [f0307046-df65-4bb4-8bce-ddf9847f3c8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0801 17:45:22.651561   31913 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220801174348-13911" [8d33bc48-5ef3-41d2-8a6c-3fc70a048090] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0801 17:45:22.651568   31913 system_pods.go:61] "metrics-server-5c6f97fb75-647p7" [c842a29c-ef57-4fdd-be7a-43b9aa1f5178] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:45:22.651574   31913 system_pods.go:61] "storage-provisioner" [1b0a55a5-6df4-4f1c-a915-748eedde2dcd] Running
	I0801 17:45:22.651577   31913 system_pods.go:74] duration metric: took 7.750651ms to wait for pod list to return data ...
	I0801 17:45:22.651584   31913 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:45:22.654773   31913 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:45:22.654788   31913 node_conditions.go:123] node cpu capacity is 6
	I0801 17:45:22.654797   31913 node_conditions.go:105] duration metric: took 3.209718ms to run NodePressure ...
	I0801 17:45:22.654815   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:22.779173   31913 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0801 17:45:22.783016   31913 kubeadm.go:777] kubelet initialised
	I0801 17:45:22.783028   31913 kubeadm.go:778] duration metric: took 3.840293ms waiting for restarted kubelet to initialise ...
	I0801 17:45:22.783039   31913 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:45:22.798314   31913 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-5s86p" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.803583   31913 pod_ready.go:92] pod "coredns-6d4b75cb6d-5s86p" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:22.803592   31913 pod_ready.go:81] duration metric: took 5.265827ms waiting for pod "coredns-6d4b75cb6d-5s86p" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.803598   31913 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.807690   31913 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:22.807699   31913 pod_ready.go:81] duration metric: took 4.096609ms waiting for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.807705   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.812128   31913 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:22.812139   31913 pod_ready.go:81] duration metric: took 4.429356ms waiting for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.812147   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:23.049650   31913 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:23.049663   31913 pod_ready.go:81] duration metric: took 237.506184ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:23.049674   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f7zxq" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:25.452177   31913 pod_ready.go:102] pod "kube-proxy-f7zxq" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:25.956303   31913 pod_ready.go:92] pod "kube-proxy-f7zxq" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:25.956316   31913 pod_ready.go:81] duration metric: took 2.90659156s waiting for pod "kube-proxy-f7zxq" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:25.956321   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:27.967784   31913 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:29.967951   31913 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:32.469695   31913 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:34.967596   31913 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:34.967609   31913 pod_ready.go:81] duration metric: took 9.011143978s waiting for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:34.967617   31913 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:36.980491   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	[... pod_ready.go:102 repeats the identical poll roughly every 2-2.5 seconds from 17:45:39 through 17:49:34, 104 lines in all, each reporting pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False" ...]
	I0801 17:49:34.976048   31913 pod_ready.go:81] duration metric: took 4m0.004717233s waiting for pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace to be "Ready" ...
	E0801 17:49:34.976075   31913 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0801 17:49:34.976092   31913 pod_ready.go:38] duration metric: took 4m12.189153798s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:49:34.976210   31913 kubeadm.go:630] restartCluster took 4m22.52701004s
	W0801 17:49:34.976332   31913 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0801 17:49:34.976363   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0801 17:49:37.337570   31913 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.361154161s)
	I0801 17:49:37.337631   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:49:37.348151   31913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:49:37.356017   31913 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:49:37.356067   31913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:49:37.363491   31913 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 17:49:37.363525   31913 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:49:37.647145   31913 out.go:204]   - Generating certificates and keys ...
	I0801 17:49:38.415463   31913 out.go:204]   - Booting up control plane ...
	I0801 17:49:44.964434   31913 out.go:204]   - Configuring RBAC rules ...
	I0801 17:49:45.340117   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:49:45.340131   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:49:45.340148   31913 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:49:45.340246   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:45.340253   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93 minikube.k8s.io/name=default-k8s-different-port-20220801174348-13911 minikube.k8s.io/updated_at=2022_08_01T17_49_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:45.475719   31913 ops.go:34] apiserver oom_adj: -16
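The oom_adj read above is minikube confirming the out-of-memory protection kubeadm gives the apiserver process (-16 in this run). A Linux-only sketch of the same check, assuming pgrep is available and matches exactly one process:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the kube-apiserver PID (assumes a single match), then read
        // its legacy OOM score adjustment from procfs.
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
        val, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        fmt.Printf("kube-apiserver oom_adj: %s", val)
    }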
	I0801 17:49:45.475734   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... ssh_runner repeats the identical `sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig` probe about every 500ms, 26 more runs from 17:49:46.055 through 17:49:58.555 ...]
	I0801 17:49:58.617518   31913 kubeadm.go:1045] duration metric: took 13.277140503s to wait for elevateKubeSystemPrivileges.
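The burst of `kubectl get sa default` runs summarized above is minikube waiting, at roughly 500ms intervals, for the apiserver to create the `default` ServiceAccount before binding cluster-admin to kube-system:default. A rough sketch of that retry shape, assuming kubectl on PATH and the kubeconfig path from the log; this is illustrative, not the actual elevateKubeSystemPrivileges code:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Succeeds only once the "default" ServiceAccount exists.
            err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "get", "sa", "default").Run()
            if err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for default service account")
    }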
	I0801 17:49:58.617535   31913 kubeadm.go:397] StartCluster complete in 4m46.204525782s
	I0801 17:49:58.617551   31913 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:49:58.617629   31913 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:49:58.618157   31913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:49:59.134508   31913 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220801174348-13911" rescaled to 1
	I0801 17:49:59.134544   31913 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:49:59.134572   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:49:59.134599   31913 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0801 17:49:59.134722   31913 config.go:180] Loaded profile config "default-k8s-different-port-20220801174348-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:49:59.173487   31913 out.go:177] * Verifying Kubernetes components...
	I0801 17:49:59.173610   31913 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.173622   31913 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247530   31913 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247534   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0801 17:49:59.247541   31913 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:49:59.173621   31913 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247569   31913 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247592   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.247570   31913 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220801174348-13911"
	W0801 17:49:59.247619   31913 addons.go:162] addon metrics-server should already be in state true
	I0801 17:49:59.226194   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0801 17:49:59.173631   31913 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247669   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.247687   31913 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220801174348-13911"
	W0801 17:49:59.247701   31913 addons.go:162] addon dashboard should already be in state true
	I0801 17:49:59.247734   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.247986   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.248049   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.248203   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.249076   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.384922   31913 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:49:59.406550   31913 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0801 17:49:59.443540   31913 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0801 17:49:59.480448   31913 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:49:59.501569   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0801 17:49:59.501592   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0801 17:49:59.501594   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 17:49:59.501769   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.539459   31913 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0801 17:49:59.501851   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.502354   31913 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220801174348-13911"
	W0801 17:49:59.576566   31913 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:49:59.576647   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0801 17:49:59.576660   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0801 17:49:59.576678   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.576764   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.580094   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.625680   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:49:59.681280   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:49:59.688282   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:49:59.691624   31913 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:49:59.691636   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:49:59.691686   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.777185   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
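The cli_runner calls above use a Go template with `docker container inspect` to extract the host port Docker published for the container's 22/tcp endpoint, which the sshutil clients then dial on 127.0.0.1. A standalone sketch of the same lookup, assuming a local docker CLI and using the container name from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template the log shows: drill into NetworkSettings.Ports["22/tcp"][0].HostPort.
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format,
            "default-k8s-different-port-20220801174348-13911").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
    }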
	I0801 17:49:59.831897   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0801 17:49:59.831911   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0801 17:49:59.910096   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0801 17:49:59.910110   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0801 17:49:59.919841   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0801 17:49:59.919858   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0801 17:49:59.921501   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:49:59.933746   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:49:59.933762   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0801 17:50:00.011707   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0801 17:50:00.011724   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0801 17:50:00.031335   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:50:00.033068   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:50:00.036467   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0801 17:50:00.036480   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0801 17:50:00.116457   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0801 17:50:00.116470   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0801 17:50:00.214471   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0801 17:50:00.214492   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0801 17:50:00.326401   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0801 17:50:00.326442   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0801 17:50:00.401494   31913 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.153808764s)
	I0801 17:50:00.401493   31913 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.15390213s)
	I0801 17:50:00.401524   31913 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
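The CoreDNS edit that just completed splices a hosts{} stanza ahead of the forward plugin so that host.minikube.internal resolves to the host gateway (192.168.65.2 in this run) from inside the cluster. A small sketch that renders the stanza the sed pipeline inserts; the gateway value is normally detected at runtime rather than hard-coded:

    package main

    import "fmt"

    func main() {
        gateway := "192.168.65.2" // host gateway seen in this run; varies by driver
        stanza := "        hosts {\n" +
            "           " + gateway + " host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }"
        fmt.Println(stanza)
    }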
	I0801 17:50:00.401623   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:50:00.418882   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0801 17:50:00.418903   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0801 17:50:00.481810   31913 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220801174348-13911" to be "Ready" ...
	I0801 17:50:00.505662   31913 node_ready.go:49] node "default-k8s-different-port-20220801174348-13911" has status "Ready":"True"
	I0801 17:50:00.505675   31913 node_ready.go:38] duration metric: took 23.848502ms waiting for node "default-k8s-different-port-20220801174348-13911" to be "Ready" ...
	I0801 17:50:00.505683   31913 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:50:00.512490   31913 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-cvnql" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:00.540422   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0801 17:50:00.540439   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0801 17:50:00.612866   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:50:00.612881   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0801 17:50:00.637798   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:50:00.747371   31913 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220801174348-13911"
	I0801 17:50:01.390479   31913 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0801 17:50:01.432449   31913 addons.go:414] enableAddons completed in 2.297828518s
	I0801 17:50:02.527846   31913 pod_ready.go:92] pod "coredns-6d4b75cb6d-cvnql" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:02.527860   31913 pod_ready.go:81] duration metric: took 2.01531768s waiting for pod "coredns-6d4b75cb6d-cvnql" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:02.527869   31913 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-z8jfq" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.540729   31913 pod_ready.go:92] pod "coredns-6d4b75cb6d-z8jfq" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.540741   31913 pod_ready.go:81] duration metric: took 2.012836849s waiting for pod "coredns-6d4b75cb6d-z8jfq" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.540747   31913 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.545243   31913 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.545251   31913 pod_ready.go:81] duration metric: took 4.4993ms waiting for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.545258   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.548996   31913 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.549004   31913 pod_ready.go:81] duration metric: took 3.736506ms waiting for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.549010   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.552768   31913 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.552776   31913 pod_ready.go:81] duration metric: took 3.76149ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.552782   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dvn56" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.556657   31913 pod_ready.go:92] pod "kube-proxy-dvn56" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.556665   31913 pod_ready.go:81] duration metric: took 3.869516ms waiting for pod "kube-proxy-dvn56" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.556670   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.940897   31913 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.940907   31913 pod_ready.go:81] duration metric: took 384.226091ms waiting for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.940914   31913 pod_ready.go:38] duration metric: took 4.435152434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:50:04.940932   31913 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:50:04.940979   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:50:04.951301   31913 api_server.go:71] duration metric: took 5.816647694s to wait for apiserver process to appear ...
	I0801 17:50:04.951313   31913 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:50:04.951319   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:50:04.956817   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 200:
	ok
	I0801 17:50:04.958134   31913 api_server.go:140] control plane version: v1.24.3
	I0801 17:50:04.958144   31913 api_server.go:130] duration metric: took 6.826071ms to wait for apiserver health ...
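The healthz probe above is a plain HTTPS GET against the locally forwarded apiserver port, with an HTTP 200 and body `ok` treated as healthy. A minimal sketch using the port from this run; TLS verification is skipped here only because this throwaway client does not load the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // InsecureSkipVerify: this one-off probe doesn't carry the cluster CA bundle.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://127.0.0.1:52049/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }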
	I0801 17:50:04.958149   31913 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:50:05.140334   31913 system_pods.go:59] 9 kube-system pods found
	I0801 17:50:05.140349   31913 system_pods.go:61] "coredns-6d4b75cb6d-cvnql" [9614734b-2bd7-4bbf-97b5-634cb4468393] Running
	I0801 17:50:05.140353   31913 system_pods.go:61] "coredns-6d4b75cb6d-z8jfq" [860c344e-4653-4582-ab6e-19ef7308526f] Running
	I0801 17:50:05.140357   31913 system_pods.go:61] "etcd-default-k8s-different-port-20220801174348-13911" [441c7722-6d7f-4385-b0b8-649b3f4ce6f2] Running
	I0801 17:50:05.140360   31913 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220801174348-13911" [97cf9337-b5ff-477d-b398-366aee9386c6] Running
	I0801 17:50:05.140364   31913 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220801174348-13911" [a457c03f-b47f-41b4-98f9-c117f334574f] Running
	I0801 17:50:05.140368   31913 system_pods.go:61] "kube-proxy-dvn56" [c67e035f-7889-4442-a7af-6972b0937045] Running
	I0801 17:50:05.140373   31913 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220801174348-13911" [c3505894-023d-4f91-baaa-6328dac164b8] Running
	I0801 17:50:05.140378   31913 system_pods.go:61] "metrics-server-5c6f97fb75-wzfjd" [43803567-1715-4fb4-9020-c9ac939c5e55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:50:05.140383   31913 system_pods.go:61] "storage-provisioner" [1e484f79-248b-4da1-a6d5-eef631825f86] Running
	I0801 17:50:05.140387   31913 system_pods.go:74] duration metric: took 182.231588ms to wait for pod list to return data ...
	I0801 17:50:05.140392   31913 default_sa.go:34] waiting for default service account to be created ...
	I0801 17:50:05.338528   31913 default_sa.go:45] found service account: "default"
	I0801 17:50:05.338539   31913 default_sa.go:55] duration metric: took 198.14019ms for default service account to be created ...
	I0801 17:50:05.338544   31913 system_pods.go:116] waiting for k8s-apps to be running ...
	I0801 17:50:05.542082   31913 system_pods.go:86] 9 kube-system pods found
	I0801 17:50:05.542095   31913 system_pods.go:89] "coredns-6d4b75cb6d-cvnql" [9614734b-2bd7-4bbf-97b5-634cb4468393] Running
	I0801 17:50:05.542100   31913 system_pods.go:89] "coredns-6d4b75cb6d-z8jfq" [860c344e-4653-4582-ab6e-19ef7308526f] Running
	I0801 17:50:05.542103   31913 system_pods.go:89] "etcd-default-k8s-different-port-20220801174348-13911" [441c7722-6d7f-4385-b0b8-649b3f4ce6f2] Running
	I0801 17:50:05.542107   31913 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220801174348-13911" [97cf9337-b5ff-477d-b398-366aee9386c6] Running
	I0801 17:50:05.542111   31913 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220801174348-13911" [a457c03f-b47f-41b4-98f9-c117f334574f] Running
	I0801 17:50:05.542115   31913 system_pods.go:89] "kube-proxy-dvn56" [c67e035f-7889-4442-a7af-6972b0937045] Running
	I0801 17:50:05.542131   31913 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220801174348-13911" [c3505894-023d-4f91-baaa-6328dac164b8] Running
	I0801 17:50:05.542140   31913 system_pods.go:89] "metrics-server-5c6f97fb75-wzfjd" [43803567-1715-4fb4-9020-c9ac939c5e55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:50:05.542145   31913 system_pods.go:89] "storage-provisioner" [1e484f79-248b-4da1-a6d5-eef631825f86] Running
	I0801 17:50:05.542149   31913 system_pods.go:126] duration metric: took 203.598883ms to wait for k8s-apps to be running ...
	I0801 17:50:05.542158   31913 system_svc.go:44] waiting for kubelet service to be running ....
	I0801 17:50:05.542206   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:50:05.551638   31913 system_svc.go:56] duration metric: took 9.480244ms WaitForService to wait for kubelet.
	I0801 17:50:05.551649   31913 kubeadm.go:572] duration metric: took 6.41698891s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0801 17:50:05.551663   31913 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:50:05.736899   31913 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:50:05.736912   31913 node_conditions.go:123] node cpu capacity is 6
	I0801 17:50:05.736919   31913 node_conditions.go:105] duration metric: took 185.250207ms to run NodePressure ...
	I0801 17:50:05.736928   31913 start.go:216] waiting for startup goroutines ...
	I0801 17:50:05.767446   31913 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:50:05.791650   31913 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220801174348-13911" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:45:08 UTC, end at Tue 2022-08-02 00:51:18 UTC. --
	Aug 02 00:49:36 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:49:36.645250093Z" level=info msg="ignoring event" container=dbba531bf727df30cf9dbf3d94bde537a7a25abfe45fefc8b4fb3446c22de807 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:49:36 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:49:36.714073814Z" level=info msg="ignoring event" container=9638ea57d585542eb5f98c073ea509cccbe9daf46c47ee9323b75d7978f684b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:49:36 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:49:36.830303183Z" level=info msg="ignoring event" container=9adb4a9dacce84099bb5bc9fa016712ad87a98d9aceccebb852401dc4d88d908 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:49:36 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:49:36.898074718Z" level=info msg="ignoring event" container=9cdd4c2e8e843b67bc765d3efcc19a163b88e949311a6cdbdde8ecd2b2b53acc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:49:36 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:49:36.986723879Z" level=info msg="ignoring event" container=51dcc87bf156069ed5b022267ec851df4cf21ffce110252251530c614fca5211 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:01 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:01.815489273Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:01 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:01.815564570Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:01 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:01.816593777Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:02 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:02.775203273Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Aug 02 00:50:05 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:05.758687115Z" level=info msg="ignoring event" container=2e61157a30011a3009a6eef9923faeb2a202b4d3e06188a155030ed990235169 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:05 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:05.807266611Z" level=info msg="ignoring event" container=49a0d13a8074ab0ce6f0943a086fd3ad302f60f82a71c14523b86d3bd7ea0dee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:07 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:07.971499991Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3\": Get \"https://auth.docker.io/token?scope=repository%3Akubernetesui%2Fdashboard%3Apull&service=registry.docker.io\": EOF"
	Aug 02 00:50:07 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:07.972836759Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3\": Get \"https://auth.docker.io/token?scope=repository%3Akubernetesui%2Fdashboard%3Apull&service=registry.docker.io\": EOF"
	Aug 02 00:50:08 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:08.618261274Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:50:08 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:08.914613846Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:50:12 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:12.129967689Z" level=info msg="ignoring event" container=d60f488f17fe1065b06858a1c6016e8439232b28c5fb8330de3430cf9c0816e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:13 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:13.142482770Z" level=info msg="ignoring event" container=2238d514a7cff6225d6963caefab0e8d5062112646258c5d0f4f8339cc02108c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:14 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:14.329383120Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:14 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:14.329768932Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:14 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:14.330982199Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:24 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:24.586937218Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Aug 02 00:50:36 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:36.423189840Z" level=info msg="ignoring event" container=aab57476fd3b066d1b9d15fa2972041db92bab6024b1563609cebb49ac733d07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:42 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:42.305008080Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:42 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:42.305066077Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:42 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:42.378976092Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
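	Note: the "Error persisting manifest ... unexpected commit digest" entry above comes from the daemon's content-addressed store, which hashes the manifest bytes it actually received and refuses the commit when that hash differs from the digest the image was requested by. A minimal sketch of that check in Go; the function name digestMatches and the placeholder manifest bytes are illustrative, not Docker's actual code:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// digestMatches hashes the received bytes and compares the result against
// the expected "sha256:<hex>" digest, mirroring the failed-precondition
// check behind the "unexpected commit digest" warning.
func digestMatches(manifest []byte, expected string) bool {
	sum := sha256.Sum256(manifest)
	return "sha256:"+hex.EncodeToString(sum[:]) == expected
}

func main() {
	body := []byte(`{"schemaVersion":1}`) // placeholder manifest bytes, not the real manifest
	fmt.Println(digestMatches(body, "sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb"))
}

	For a schema1 (v1+prettyjws) manifest such as k8s.gcr.io/echoserver:1.4, the signed JWS wrapper means the stored bytes can differ from the payload the digest was computed over, which is a plausible source of the mismatch logged here.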
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	aab57476fd3b0       a90209bb39e3d                                                                                    42 seconds ago       Exited              dashboard-metrics-scraper   2                   eab0368196dfb
	9f56fd824bc5f       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   49 seconds ago       Running             kubernetes-dashboard        0                   fb87cf0fcc1f7
	b279da282f80f       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   ed69aa184eb16
	027d6efe72a6a       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   209dc38a5e767
	e8bc638faf651       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   06a4f994ca171
	ebef0cd649b37       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   ea4e7e98257fb
	e40d954d1f100       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   1a8045f98b3b1
	1daa6699c3713       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   3c47cda00da87
	37b405b118d92       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   dfa1fa748d1c3
	
	* 
	* ==> coredns [027d6efe72a6] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220801174348-13911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220801174348-13911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93
	                    minikube.k8s.io/name=default-k8s-different-port-20220801174348-13911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_08_01T17_49_45_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Aug 2022 00:49:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220801174348-13911
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Aug 2022 00:51:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Aug 2022 00:51:16 +0000   Tue, 02 Aug 2022 00:49:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Aug 2022 00:51:16 +0000   Tue, 02 Aug 2022 00:49:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Aug 2022 00:51:16 +0000   Tue, 02 Aug 2022 00:49:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Aug 2022 00:51:16 +0000   Tue, 02 Aug 2022 00:49:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20220801174348-13911
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c192b04687c403f8fbb9bc7975b21b3
	  System UUID:                1bcc3bc9-f0da-4ff3-aea8-f9de709d8302
	  Boot ID:                    71cf565c-fd32-45eb-95e1-c87a7a5ba5a0
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-z8jfq                                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     81s
	  kube-system                 etcd-default-k8s-different-port-20220801174348-13911                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         94s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220801174348-13911             250m (4%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220801174348-13911    200m (3%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-dvn56                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220801174348-13911             100m (1%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 metrics-server-5c6f97fb75-wzfjd                                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         79s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-8jj4s                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-49lj4                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 80s   kube-proxy       
	  Normal  Starting                 94s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  94s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  94s   kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s   kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s   kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasSufficientPID
	  Normal  NodeReady                94s   kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeReady
	  Normal  RegisteredNode           82s   node-controller  Node default-k8s-different-port-20220801174348-13911 event: Registered Node default-k8s-different-port-20220801174348-13911 in Controller
	  Normal  Starting                 3s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s    kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3s    kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [ebef0cd649b3] <==
	* {"level":"info","ts":"2022-08-02T00:49:39.825Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-08-02T00:49:39.826Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-08-02T00:49:39.827Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-08-02T00:49:39.827Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-08-02T00:49:39.827Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:49:39.827Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20220801174348-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:49:40.268Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:49:40.268Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-08-02T00:49:40.268Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:49:40.269Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-08-02T00:50:07.436Z","caller":"traceutil/trace.go:171","msg":"trace[1333753144] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"101.966213ms","start":"2022-08-02T00:50:07.334Z","end":"2022-08-02T00:50:07.435Z","steps":["trace[1333753144] 'process raft request'  (duration: 26.42704ms)","trace[1333753144] 'compare'  (duration: 75.069696ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  00:51:19 up  1:16,  0 users,  load average: 0.72, 0.68, 0.80
	Linux default-k8s-different-port-20220801174348-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [37b405b118d9] <==
	* I0802 00:49:44.636191       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 00:49:45.164715       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0802 00:49:45.170551       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0802 00:49:45.178286       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0802 00:49:45.266978       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 00:49:58.141389       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0802 00:49:58.190667       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0802 00:49:58.734169       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0802 00:50:00.752564       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.101.55.90]
	I0802 00:50:01.365674       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.106.198.174]
	I0802 00:50:01.374405       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.104.203.13]
	W0802 00:50:01.651309       1 handler_proxy.go:102] no RequestInfo found in the context
	W0802 00:50:01.651334       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:50:01.651350       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:50:01.651355       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0802 00:50:01.651357       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:50:01.652615       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:51:15.969039       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:51:15.969055       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:51:15.969060       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:51:15.969709       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:51:15.969730       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:51:15.970064       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [1daa6699c371] <==
	* I0802 00:49:58.649022       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0802 00:49:58.655289       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-cvnql"
	I0802 00:50:00.635755       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0802 00:50:00.639551       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0802 00:50:00.643873       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0802 00:50:00.650832       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-wzfjd"
	I0802 00:50:01.277624       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0802 00:50:01.282749       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:50:01.287293       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0802 00:50:01.287662       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0802 00:50:01.292720       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:50:01.293052       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:50:01.293095       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:50:01.294970       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:50:01.295061       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:50:01.297074       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0802 00:50:01.299645       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:50:01.299719       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:50:01.319643       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-49lj4"
	I0802 00:50:01.320138       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-8jj4s"
	W0802 00:50:06.995085       1 endpointslice_controller.go:302] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0802 00:50:27.606462       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0802 00:50:28.104257       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0802 00:51:16.086708       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0802 00:51:16.156895       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [e8bc638faf65] <==
	* I0802 00:49:58.704174       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:49:58.704231       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:49:58.704269       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:49:58.730963       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:49:58.731000       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:49:58.731008       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:49:58.731017       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:49:58.731035       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:49:58.731181       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:49:58.731417       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:49:58.731445       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:49:58.731854       1 config.go:317] "Starting service config controller"
	I0802 00:49:58.731899       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:49:58.731915       1 config.go:444] "Starting node config controller"
	I0802 00:49:58.731918       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:49:58.732287       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:49:58.732316       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:49:58.832334       1 shared_informer.go:262] Caches are synced for node config
	I0802 00:49:58.832399       1 shared_informer.go:262] Caches are synced for service config
	I0802 00:49:58.832414       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [e40d954d1f10] <==
	* E0802 00:49:42.547353       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 00:49:42.545791       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 00:49:42.547360       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 00:49:42.547236       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 00:49:42.547586       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 00:49:42.547616       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 00:49:43.369159       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 00:49:43.369195       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 00:49:43.369233       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0802 00:49:43.369240       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0802 00:49:43.395309       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 00:49:43.395344       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 00:49:43.473103       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 00:49:43.473157       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 00:49:43.555960       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 00:49:43.555978       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 00:49:43.622150       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 00:49:43.622239       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 00:49:43.669316       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 00:49:43.669334       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 00:49:43.673244       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 00:49:43.673352       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0802 00:49:43.698477       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 00:49:43.698632       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0802 00:49:46.940826       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:45:08 UTC, end at Tue 2022-08-02 00:51:20 UTC. --
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590666    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2g7w\" (UniqueName: \"kubernetes.io/projected/43803567-1715-4fb4-9020-c9ac939c5e55-kube-api-access-d2g7w\") pod \"metrics-server-5c6f97fb75-wzfjd\" (UID: \"43803567-1715-4fb4-9020-c9ac939c5e55\") " pod="kube-system/metrics-server-5c6f97fb75-wzfjd"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590721    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c67e035f-7889-4442-a7af-6972b0937045-xtables-lock\") pod \"kube-proxy-dvn56\" (UID: \"c67e035f-7889-4442-a7af-6972b0937045\") " pod="kube-system/kube-proxy-dvn56"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590742    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/860c344e-4653-4582-ab6e-19ef7308526f-config-volume\") pod \"coredns-6d4b75cb6d-z8jfq\" (UID: \"860c344e-4653-4582-ab6e-19ef7308526f\") " pod="kube-system/coredns-6d4b75cb6d-z8jfq"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590761    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzmd\" (UniqueName: \"kubernetes.io/projected/1e484f79-248b-4da1-a6d5-eef631825f86-kube-api-access-dbzmd\") pod \"storage-provisioner\" (UID: \"1e484f79-248b-4da1-a6d5-eef631825f86\") " pod="kube-system/storage-provisioner"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590779    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e200b550-e12b-448d-a50e-7c3e4b390f31-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-8jj4s\" (UID: \"e200b550-e12b-448d-a50e-7c3e4b390f31\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-8jj4s"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590795    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8w4\" (UniqueName: \"kubernetes.io/projected/860c344e-4653-4582-ab6e-19ef7308526f-kube-api-access-pr8w4\") pod \"coredns-6d4b75cb6d-z8jfq\" (UID: \"860c344e-4653-4582-ab6e-19ef7308526f\") " pod="kube-system/coredns-6d4b75cb6d-z8jfq"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590811    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c67e035f-7889-4442-a7af-6972b0937045-lib-modules\") pod \"kube-proxy-dvn56\" (UID: \"c67e035f-7889-4442-a7af-6972b0937045\") " pod="kube-system/kube-proxy-dvn56"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590826    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/180a2c1b-6569-45b1-8704-8dd02927b1bd-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-49lj4\" (UID: \"180a2c1b-6569-45b1-8704-8dd02927b1bd\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-49lj4"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590840    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/43803567-1715-4fb4-9020-c9ac939c5e55-tmp-dir\") pod \"metrics-server-5c6f97fb75-wzfjd\" (UID: \"43803567-1715-4fb4-9020-c9ac939c5e55\") " pod="kube-system/metrics-server-5c6f97fb75-wzfjd"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590892    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp297\" (UniqueName: \"kubernetes.io/projected/c67e035f-7889-4442-a7af-6972b0937045-kube-api-access-wp297\") pod \"kube-proxy-dvn56\" (UID: \"c67e035f-7889-4442-a7af-6972b0937045\") " pod="kube-system/kube-proxy-dvn56"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590922    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sckf\" (UniqueName: \"kubernetes.io/projected/180a2c1b-6569-45b1-8704-8dd02927b1bd-kube-api-access-6sckf\") pod \"kubernetes-dashboard-5fd5574d9f-49lj4\" (UID: \"180a2c1b-6569-45b1-8704-8dd02927b1bd\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-49lj4"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590953    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e484f79-248b-4da1-a6d5-eef631825f86-tmp\") pod \"storage-provisioner\" (UID: \"1e484f79-248b-4da1-a6d5-eef631825f86\") " pod="kube-system/storage-provisioner"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590972    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c67e035f-7889-4442-a7af-6972b0937045-kube-proxy\") pod \"kube-proxy-dvn56\" (UID: \"c67e035f-7889-4442-a7af-6972b0937045\") " pod="kube-system/kube-proxy-dvn56"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.591033    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x44h\" (UniqueName: \"kubernetes.io/projected/e200b550-e12b-448d-a50e-7c3e4b390f31-kube-api-access-7x44h\") pod \"dashboard-metrics-scraper-dffd48c4c-8jj4s\" (UID: \"e200b550-e12b-448d-a50e-7c3e4b390f31\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-8jj4s"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.591135    9979 reconciler.go:157] "Reconciler: start to sync state"
	Aug 02 00:51:18 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:18.766490    9979 request.go:601] Waited for 1.166528284s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Aug 02 00:51:18 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:18.824633    9979 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220801174348-13911\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220801174348-13911"
	Aug 02 00:51:18 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:18.973870    9979 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220801174348-13911\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220801174348-13911"
	Aug 02 00:51:19 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:19.170869    9979 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220801174348-13911\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220801174348-13911"
	Aug 02 00:51:19 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:19.440950    9979 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220801174348-13911\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220801174348-13911"
	Aug 02 00:51:19 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:19.670511    9979 scope.go:110] "RemoveContainer" containerID="aab57476fd3b066d1b9d15fa2972041db92bab6024b1563609cebb49ac733d07"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:20.139345    9979 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:20.139413    9979 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:20.139583    9979 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d2g7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-wzfjd_kube-system(43803567-1715-4fb4-9020-c9ac939c5e55): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:20.139613    9979 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-wzfjd" podUID=43803567-1715-4fb4-9020-c9ac939c5e55
	
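	Note: the kubelet entries above surface a single root failure, the DNS lookup for fake.domain, at successive layers (remote_image, kuberuntime_image, kuberuntime_manager, pod_workers). A minimal Go sketch of this wrap-and-rethrow pattern; the names are illustrative, not kubelet's actual call chain:

package main

import (
	"errors"
	"fmt"
)

// errNoSuchHost stands in for the root DNS failure seen in the log.
var errNoSuchHost = errors.New("dial tcp: lookup fake.domain: no such host")

// pullImage wraps the root cause with layer-specific context, as the
// image service layer does above.
func pullImage(image string) error {
	return fmt.Errorf("PullImage %q from image service failed: %w", image, errNoSuchHost)
}

// startContainer wraps again at the runtime-manager layer.
func startContainer(image string) error {
	if err := pullImage(image); err != nil {
		return fmt.Errorf("failed to \"StartContainer\": %w", err)
	}
	return nil
}

func main() {
	err := startContainer("fake.domain/k8s.gcr.io/echoserver:1.4")
	fmt.Println(err)
	fmt.Println(errors.Is(err, errNoSuchHost)) // true: root cause survives both wraps
}

	Because each layer wraps with %w, errors.Is still finds the root cause after the rewraps, which is what lets a caller distinguish ErrImagePull-class failures from other pod sync errors.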
	* 
	* ==> kubernetes-dashboard [9f56fd824bc5] <==
	* 2022/08/02 00:50:29 Starting overwatch
	2022/08/02 00:50:29 Using namespace: kubernetes-dashboard
	2022/08/02 00:50:29 Using in-cluster config to connect to apiserver
	2022/08/02 00:50:29 Using secret token for csrf signing
	2022/08/02 00:50:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/08/02 00:50:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/08/02 00:50:29 Successful initial request to the apiserver, version: v1.24.3
	2022/08/02 00:50:29 Generating JWE encryption key
	2022/08/02 00:50:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/08/02 00:50:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/08/02 00:50:29 Initializing JWE encryption key from synchronized object
	2022/08/02 00:50:29 Creating in-cluster Sidecar client
	2022/08/02 00:50:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/08/02 00:50:29 Serving insecurely on HTTP port: 9090
	2022/08/02 00:51:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [b279da282f80] <==
	* I0802 00:50:01.655623       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 00:50:01.664007       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 00:50:01.664055       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 00:50:01.669321       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 00:50:01.669442       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220801174348-13911_ae20c346-a628-4ee6-869e-4b781f24b010!
	I0802 00:50:01.669749       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4378a963-0a3f-49bb-9d97-2fb63b088c26", APIVersion:"v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220801174348-13911_ae20c346-a628-4ee6-869e-4b781f24b010 became leader
	I0802 00:50:01.769623       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220801174348-13911_ae20c346-a628-4ee6-869e-4b781f24b010!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220801174348-13911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-wzfjd
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220801174348-13911 describe pod metrics-server-5c6f97fb75-wzfjd
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220801174348-13911 describe pod metrics-server-5c6f97fb75-wzfjd: exit status 1 (272.527547ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-wzfjd" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220801174348-13911 describe pod metrics-server-5c6f97fb75-wzfjd: exit status 1
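	Note: the docker inspect dump below is a JSON array with one object per container, and the post-mortem mainly needs its State block (Status, Running, StartedAt, FinishedAt). A minimal Go sketch that decodes just those fields; the struct names are illustrative and only a subset of the fields shown below is declared:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerState mirrors the "State" object in the docker inspect output.
type containerState struct {
	Status     string `json:"Status"`
	Running    bool   `json:"Running"`
	StartedAt  string `json:"StartedAt"`
	FinishedAt string `json:"FinishedAt"`
}

// inspectEntry mirrors one element of the top-level JSON array.
type inspectEntry struct {
	Name  string         `json:"Name"`
	State containerState `json:"State"`
}

func main() {
	// docker inspect prints a JSON array with one entry per named container.
	out, err := exec.Command("docker", "inspect",
		"default-k8s-different-port-20220801174348-13911").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Printf("%s: status=%s running=%v started=%s\n",
			e.Name, e.State.Status, e.State.Running, e.State.StartedAt)
	}
}

	Declaring only the needed fields keeps the decode tolerant of the many HostConfig/GraphDriver fields that follow in the full dump.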
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220801174348-13911
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220801174348-13911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63",
	        "Created": "2022-08-02T00:43:55.412589795Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:45:08.503134427Z",
	            "FinishedAt": "2022-08-02T00:45:06.545740921Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63/hosts",
	        "LogPath": "/var/lib/docker/containers/e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63/e9ca8d08aadae55d03ab2ca5b3ccea00792891cb32fbab067470184a612b1d63-json.log",
	        "Name": "/default-k8s-different-port-20220801174348-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220801174348-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220801174348-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62b4ab3a37e12ec7b8a73bd4e3f08a6635b56897f6a52e50a66b843e800c9075-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62b4ab3a37e12ec7b8a73bd4e3f08a6635b56897f6a52e50a66b843e800c9075/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62b4ab3a37e12ec7b8a73bd4e3f08a6635b56897f6a52e50a66b843e800c9075/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62b4ab3a37e12ec7b8a73bd4e3f08a6635b56897f6a52e50a66b843e800c9075/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220801174348-13911",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220801174348-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220801174348-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220801174348-13911",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220801174348-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "640df6005f39f95043d429d59991a0ec13ed3cb6a8bc9511e25e9fbe49a63647",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52050"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52051"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52048"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52049"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/640df6005f39",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220801174348-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e9ca8d08aada",
	                        "default-k8s-different-port-20220801174348-13911"
	                    ],
	                    "NetworkID": "93e4fac921bdf274c24ca84fb85972d1783a1db2a54eb681a049895a93516443",
	                    "EndpointID": "e398daf01fb716d2ababbbc27826f8eaa35ea274bfbc2ca263f407b5e32abe87",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
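Each container port in the inspect output above is published with HostPort "0", so Docker assigns an ephemeral host port at start; the live assignments appear under NetworkSettings.Ports (52048-52052 in this run). Rather than reading the full JSON, the same facts can be pulled with a targeted query; a minimal sketch against the container from this run:

  # container state only
  docker inspect -f '{{.State.Status}}' default-k8s-different-port-20220801174348-13911
  # resolve the ephemeral host port mapped to the apiserver port (8444/tcp)
  docker port default-k8s-different-port-20220801174348-13911 8444/tcp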
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220801174348-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220801174348-13911 logs -n 25: (2.749139638s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220801172716-13911            | jenkins | v1.26.0 | 01 Aug 22 17:33 PDT |                     |
	|         | old-k8s-version-20220801172716-13911              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:35 PDT | 01 Aug 22 17:35 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220801172918-13911                | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | embed-certs-20220801172918-13911                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220801173625-13911      | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:36 PDT |
	|         | disable-driver-mounts-20220801173625-13911        |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:36 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:42 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:51 PDT |
	|         | default-k8s-different-port-20220801174348-13911   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
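Condensed, the audit trail above is the StartStop serial lifecycle for this profile; replayed by hand it would look roughly like the sequence below (a sketch reusing the flags logged in the table; <profile> stands in for the generated profile name):

  out/minikube-darwin-amd64 start -p <profile> --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.24.3
  out/minikube-darwin-amd64 addons enable metrics-server -p <profile> --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  out/minikube-darwin-amd64 stop -p <profile> --alsologtostderr -v=3
  out/minikube-darwin-amd64 addons enable dashboard -p <profile> --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
  out/minikube-darwin-amd64 start -p <profile> --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.24.3
  out/minikube-darwin-amd64 pause -p <profile> --alsologtostderr -v=1
  out/minikube-darwin-amd64 unpause -p <profile> --alsologtostderr -v=1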
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:45:07
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:45:07.234304   31913 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:45:07.234495   31913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:45:07.234500   31913 out.go:309] Setting ErrFile to fd 2...
	I0801 17:45:07.234506   31913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:45:07.234609   31913 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:45:07.235111   31913 out.go:303] Setting JSON to false
	I0801 17:45:07.250217   31913 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9878,"bootTime":1659391229,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:45:07.250344   31913 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:45:07.272605   31913 out.go:177] * [default-k8s-different-port-20220801174348-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:45:07.294231   31913 notify.go:193] Checking for updates...
	I0801 17:45:07.316180   31913 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:45:07.337992   31913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:45:07.359246   31913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:45:07.380136   31913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:45:07.401417   31913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:45:07.423835   31913 config.go:180] Loaded profile config "default-k8s-different-port-20220801174348-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:45:07.424579   31913 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:45:07.493796   31913 docker.go:137] docker version: linux-20.10.17
	I0801 17:45:07.493922   31913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:45:07.627933   31913 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:45:07.572823528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:45:07.671605   31913 out.go:177] * Using the docker driver based on existing profile
	I0801 17:45:07.693541   31913 start.go:284] selected driver: docker
	I0801 17:45:07.693586   31913 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220801174348-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port
-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:tru
e] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:45:07.693713   31913 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:45:07.697078   31913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:45:07.829506   31913 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:45:07.755017741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:45:07.829656   31913 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 17:45:07.829672   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:45:07.829681   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:45:07.829693   31913 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220801174348-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:45:07.873313   31913 out.go:177] * Starting control plane node default-k8s-different-port-20220801174348-13911 in cluster default-k8s-different-port-20220801174348-13911
	I0801 17:45:07.894272   31913 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:45:07.916207   31913 out.go:177] * Pulling base image ...
	I0801 17:45:07.958286   31913 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:45:07.958311   31913 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:45:07.958367   31913 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0801 17:45:07.958398   31913 cache.go:57] Caching tarball of preloaded images
	I0801 17:45:07.958586   31913 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:45:07.958621   31913 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0801 17:45:07.959565   31913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/config.json ...
	I0801 17:45:08.023522   31913 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:45:08.023554   31913 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:45:08.023592   31913 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:45:08.023643   31913 start.go:371] acquiring machines lock for default-k8s-different-port-20220801174348-13911: {Name:mkf36bcbf3258128efc6b862fc1634fd58cb6b31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:45:08.023718   31913 start.go:375] acquired machines lock for "default-k8s-different-port-20220801174348-13911" in 52.949µs
	I0801 17:45:08.023737   31913 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:45:08.023747   31913 fix.go:55] fixHost starting: 
	I0801 17:45:08.023973   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:45:08.091536   31913 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220801174348-13911: state=Stopped err=<nil>
	W0801 17:45:08.091569   31913 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:45:08.135438   31913 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220801174348-13911" ...
	I0801 17:45:08.157366   31913 cli_runner.go:164] Run: docker start default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.512780   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:45:08.585392   31913 kic.go:415] container "default-k8s-different-port-20220801174348-13911" state is running.
	I0801 17:45:08.586032   31913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.658898   31913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/config.json ...
	I0801 17:45:08.659358   31913 machine.go:88] provisioning docker machine ...
	I0801 17:45:08.659384   31913 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220801174348-13911"
	I0801 17:45:08.659447   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.732726   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:08.732938   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:08.732958   31913 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220801174348-13911 && echo "default-k8s-different-port-20220801174348-13911" | sudo tee /etc/hostname
	I0801 17:45:08.854995   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220801174348-13911
	
	I0801 17:45:08.855093   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:08.929782   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:08.929919   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:08.929937   31913 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220801174348-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220801174348-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220801174348-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:45:09.043526   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:45:09.043548   31913 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:45:09.043582   31913 ubuntu.go:177] setting up certificates
	I0801 17:45:09.043592   31913 provision.go:83] configureAuth start
	I0801 17:45:09.043662   31913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.122482   31913 provision.go:138] copyHostCerts
	I0801 17:45:09.122564   31913 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:45:09.122573   31913 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:45:09.122680   31913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:45:09.122870   31913 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:45:09.122879   31913 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:45:09.122942   31913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:45:09.123074   31913 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:45:09.123082   31913 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:45:09.123138   31913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:45:09.123253   31913 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220801174348-13911 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220801174348-13911]
	I0801 17:45:09.314883   31913 provision.go:172] copyRemoteCerts
	I0801 17:45:09.314960   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:45:09.315013   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.387026   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:09.473014   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:45:09.489683   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0801 17:45:09.506040   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 17:45:09.522131   31913 provision.go:86] duration metric: configureAuth took 478.520974ms
	I0801 17:45:09.522145   31913 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:45:09.522300   31913 config.go:180] Loaded profile config "default-k8s-different-port-20220801174348-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:45:09.522359   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.593649   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:09.593822   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:09.593832   31913 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:45:09.706218   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:45:09.706232   31913 ubuntu.go:71] root file system type: overlay
	I0801 17:45:09.706373   31913 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:45:09.706456   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.777069   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:09.777323   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:09.777370   31913 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:45:09.897154   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
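The unit file written and echoed above follows the standard systemd override idiom: an empty ExecStart= first clears the command inherited from the base dockerd configuration, and the second ExecStart= installs minikube's TLS-enabled command line. A minimal Go sketch of how such a unit could be rendered from a template (the trimmed-down unit body and field names are illustrative, not minikube's actual provisioner code):

    package main

    import (
        "os"
        "text/template"
    )

    // Trimmed-down docker.service body demonstrating the ExecStart-reset idiom.
    const unitTmpl = `[Service]
    Type=notify
    Restart=on-failure

    # An empty ExecStart= clears any command inherited from the base unit;
    # without it systemd rejects the unit ("more than one ExecStart= setting").
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}}
    `

    type unitParams struct {
        CACert, ServerCert, ServerKey string
    }

    func main() {
        t := template.Must(template.New("docker.service").Parse(unitTmpl))
        // A provisioner would tee this to /lib/systemd/system/docker.service.new
        // over SSH, exactly as the log shows; here it just goes to stdout.
        t.Execute(os.Stdout, unitParams{
            CACert:     "/etc/docker/ca.pem",
            ServerCert: "/etc/docker/server.pem",
            ServerKey:  "/etc/docker/server-key.pem",
        })
    }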
	I0801 17:45:09.897240   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:09.968155   31913 main.go:134] libmachine: Using SSH client type: native
	I0801 17:45:09.968332   31913 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52050 <nil> <nil>}
	I0801 17:45:09.968348   31913 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:45:10.085677   31913 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:45:10.085692   31913 machine.go:91] provisioned docker machine in 1.42630245s
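The `diff -u old new || { ... }` command above is an idempotent update: diff exits 0 when the freshly rendered unit matches what is already installed (so nothing is restarted), and non-zero when it differs, which triggers the move, daemon-reload, enable, and restart. A sketch of the same gate driven from Go, run locally via bash -c (minikube issues it over SSH instead; this is an illustration, not its actual code):

    package main

    import "os/exec"

    func main() {
        // diff exits non-zero only when the rendered unit differs from the
        // installed one, so the restart branch runs only on real changes.
        script := `diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      systemctl -f daemon-reload && systemctl -f enable docker && systemctl -f restart docker
    }`
        if out, err := exec.Command("sudo", "bash", "-c", script).CombinedOutput(); err != nil {
            panic(string(out))
        }
    }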
	I0801 17:45:10.085699   31913 start.go:307] post-start starting for "default-k8s-different-port-20220801174348-13911" (driver="docker")
	I0801 17:45:10.085706   31913 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:45:10.085791   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:45:10.085838   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.157469   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.242875   31913 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:45:10.246401   31913 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:45:10.246414   31913 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:45:10.246421   31913 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:45:10.246425   31913 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:45:10.246432   31913 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:45:10.246536   31913 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:45:10.246673   31913 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:45:10.246820   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:45:10.253819   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:45:10.270599   31913 start.go:310] post-start completed in 184.88853ms
	I0801 17:45:10.270679   31913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:45:10.270725   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.341693   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.424621   31913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:45:10.428724   31913 fix.go:57] fixHost completed within 2.40494101s
	I0801 17:45:10.428734   31913 start.go:82] releasing machines lock for "default-k8s-different-port-20220801174348-13911", held for 2.404972203s
	I0801 17:45:10.428805   31913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.499445   31913 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:45:10.499453   31913 ssh_runner.go:195] Run: systemctl --version
	I0801 17:45:10.499510   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.499521   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:10.577297   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.580177   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:45:10.863075   31913 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:45:10.872943   31913 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:45:10.873004   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:45:10.884327   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:45:10.896972   31913 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:45:10.964365   31913 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:45:11.037267   31913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:45:11.105843   31913 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:45:11.334865   31913 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0801 17:45:11.408996   31913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:45:11.478637   31913 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0801 17:45:11.489256   31913 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0801 17:45:11.489322   31913 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0801 17:45:11.493283   31913 start.go:471] Will wait 60s for crictl version
	I0801 17:45:11.493327   31913 ssh_runner.go:195] Run: sudo crictl version
	I0801 17:45:11.594433   31913 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0801 17:45:11.594501   31913 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:45:11.628725   31913 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:45:11.707948   31913 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0801 17:45:11.708167   31913 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220801174348-13911 dig +short host.docker.internal
	I0801 17:45:11.835685   31913 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:45:11.835785   31913 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:45:11.839982   31913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
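The bash one-liner above rewrites /etc/hosts in a crash-safe order: filter out any stale host.minikube.internal entry, append the fresh mapping, write the result to a temp file, and only then copy it over /etc/hosts. The same logic in a Go sketch (direct overwrite here for brevity; the temp-file-then-copy step is what the log's version adds):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Drop any existing host.minikube.internal line, then append the
        // current mapping, mirroring the grep -v / echo pipeline in the log.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.65.2\thost.minikube.internal")
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }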
	I0801 17:45:11.849128   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:11.920457   31913 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:45:11.920518   31913 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:45:11.950489   31913 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0801 17:45:11.950505   31913 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:45:11.950592   31913 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:45:11.979888   31913 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0801 17:45:11.979908   31913 cache_images.go:84] Images are preloaded, skipping loading
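The two `docker images --format {{.Repository}}:{{.Tag}}` listings above are how the preload check works: if every image the target Kubernetes version needs is already present in the engine, extracting the preloaded tarball is skipped. A sketch of that decision (illustrative, not minikube's exact code; the expected list is abbreviated):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagesPreloaded reports whether every expected repo:tag already shows up
    // in the local engine's image list.
    func imagesPreloaded(expected []string) (bool, error) {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        for _, img := range expected {
            if !have[img] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := imagesPreloaded([]string{
            "k8s.gcr.io/kube-apiserver:v1.24.3",
            "k8s.gcr.io/etcd:3.5.3-0",
        })
        fmt.Println(ok, err)
    }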
	I0801 17:45:11.979982   31913 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:45:12.056792   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:45:12.056805   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:45:12.056818   31913 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 17:45:12.056833   31913 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220801174348-13911 NodeName:default-k8s-different-port-20220801174348-13911 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:45:12.056925   31913 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220801174348-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
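The generated kubeadm.yaml above is a single multi-document YAML stream: InitConfiguration (node-local bootstrap settings, bindPort 8444), ClusterConfiguration (API server SANs, etcd, networking), KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. A sketch that walks such a stream document by document using gopkg.in/yaml.v3 (an external module dependency; the path and field selection are illustrative):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        // yaml.v3's Decoder yields one document per Decode call and io.EOF
        // after the last one, which handles the `---` separators for us.
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }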
	I0801 17:45:12.057065   31913 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220801174348-13911 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0801 17:45:12.057131   31913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 17:45:12.066061   31913 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:45:12.066148   31913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:45:12.073618   31913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0801 17:45:12.087045   31913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:45:12.099457   31913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0801 17:45:12.112836   31913 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:45:12.116178   31913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:45:12.125809   31913 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911 for IP: 192.168.67.2
	I0801 17:45:12.125918   31913 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:45:12.125966   31913 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:45:12.126040   31913 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.key
	I0801 17:45:12.126618   31913 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/apiserver.key.c7fa3a9e
	I0801 17:45:12.126780   31913 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/proxy-client.key
	I0801 17:45:12.127193   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:45:12.127456   31913 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:45:12.127470   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:45:12.127507   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:45:12.127537   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:45:12.127568   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:45:12.127653   31913 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:45:12.128137   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:45:12.145970   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 17:45:12.162916   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:45:12.179232   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0801 17:45:12.195637   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:45:12.211954   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:45:12.228458   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:45:12.245049   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:45:12.273133   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:45:12.289297   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:45:12.305906   31913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:45:12.322249   31913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:45:12.334078   31913 ssh_runner.go:195] Run: openssl version
	I0801 17:45:12.338984   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:45:12.346569   31913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:45:12.350437   31913 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:45:12.350479   31913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:45:12.355640   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:45:12.362358   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:45:12.369524   31913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:45:12.373094   31913 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:45:12.373143   31913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:45:12.378297   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
	I0801 17:45:12.385323   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:45:12.392716   31913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:45:12.396215   31913 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:45:12.396253   31913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:45:12.401492   31913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
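The openssl/ln sequence above implements OpenSSL's hashed-directory lookup: trust stores like /etc/ssl/certs are searched by subject-name hash, so each installed PEM needs a `<hash>.0` symlink (51391683.0, 3ec20f2e.0, b5213941.0 in this run). A Go sketch of that convention (paths mirror the log; error handling kept minimal):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCALink creates the /etc/ssl/certs/<subject-hash>.0 symlink that
    // OpenSSL uses to locate a CA certificate by name.
    func installCALink(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        os.Remove(link) // replace any stale link, like `ln -fs`
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCALink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }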
	I0801 17:45:12.408598   31913 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220801174348-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220801174348-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:45:12.408692   31913 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:45:12.437555   31913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:45:12.445130   31913 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:45:12.445143   31913 kubeadm.go:626] restartCluster start
	I0801 17:45:12.445184   31913 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:45:12.451625   31913 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:12.451684   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:45:12.522544   31913 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220801174348-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:45:12.522712   31913 kubeconfig.go:127] "default-k8s-different-port-20220801174348-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:45:12.523108   31913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:45:12.524240   31913 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:45:12.531709   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:12.531764   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:12.539797   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:12.740348   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:12.740540   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:12.750680   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:12.941944   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:12.942091   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:12.952401   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.141761   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.141933   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.152103   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.341140   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.341291   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.351393   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.541127   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.541267   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.550653   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.741989   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.742177   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.752445   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:13.939964   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:13.940062   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:13.949928   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.141998   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.142136   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.152691   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.340125   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.340267   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.350279   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.541428   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.541614   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.551563   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.741132   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.741260   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.751806   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:14.942014   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:14.942215   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:14.952554   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.140909   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.141047   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.151515   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.339961   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.340060   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.349894   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.539967   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.540029   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.548707   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.548716   31913 api_server.go:165] Checking apiserver status ...
	I0801 17:45:15.548755   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:45:15.556495   31913 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
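The repeated "Checking apiserver status" stanzas above are a fixed-interval poll: pgrep is re-run roughly every 200ms until it finds a kube-apiserver process or the deadline expires, at which point minikube concludes (next line) that the cluster needs a reconfigure. The shape of that loop, sketched in Go (interval and timeout are read off the timestamps, not taken from minikube's source):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerPID re-runs pgrep until a matching process appears or
    // the deadline passes, mirroring the polling stanzas in the log.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            time.Sleep(200 * time.Millisecond)
        }
        return "", errors.New("timed out waiting for the condition")
    }

    func main() {
        pid, err := waitForAPIServerPID(3 * time.Second)
        fmt.Println(pid, err)
    }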
	I0801 17:45:15.556506   31913 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0801 17:45:15.556515   31913 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:45:15.556573   31913 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:45:15.588109   31913 docker.go:443] Stopping containers: [5330cf5dab78 804bfd7a4dd6 b753d3511dd1 ec2aabab3838 a079991f7e29 56f67accc23d f4047c9cc1b3 ae0ff377c871 d505ae905c0f 76bf3aba28e0 f366c63a7d21 8f26f8c13f7f 0da89e56674b f94f6bde6263 64851a902487 66e806932a2b]
	I0801 17:45:15.588183   31913 ssh_runner.go:195] Run: docker stop 5330cf5dab78 804bfd7a4dd6 b753d3511dd1 ec2aabab3838 a079991f7e29 56f67accc23d f4047c9cc1b3 ae0ff377c871 d505ae905c0f 76bf3aba28e0 f366c63a7d21 8f26f8c13f7f 0da89e56674b f94f6bde6263 64851a902487 66e806932a2b
	I0801 17:45:15.617424   31913 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:45:15.627354   31913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:45:15.634554   31913 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  2 00:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug  2 00:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Aug  2 00:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug  2 00:44 /etc/kubernetes/scheduler.conf
	
	I0801 17:45:15.634603   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0801 17:45:15.641371   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0801 17:45:15.648041   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0801 17:45:15.654727   31913 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.654766   31913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0801 17:45:15.661325   31913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0801 17:45:15.668099   31913 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:45:15.668152   31913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0801 17:45:15.674654   31913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:45:15.681717   31913 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:45:15.681728   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:15.726589   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.500734   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.684111   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.732262   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:16.805126   31913 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:45:16.805184   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:17.316069   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:17.815974   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:18.316161   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:45:18.326717   31913 api_server.go:71] duration metric: took 1.521564045s to wait for apiserver process to appear ...
	I0801 17:45:18.326733   31913 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:45:18.326742   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:21.129223   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0801 17:45:21.129239   31913 api_server.go:102] status: https://127.0.0.1:52049/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:45:21.631396   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:21.638935   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:45:21.638953   31913 api_server.go:102] status: https://127.0.0.1:52049/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:45:22.130245   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:22.135894   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:45:22.135911   31913 api_server.go:102] status: https://127.0.0.1:52049/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:45:22.629735   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:45:22.636164   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 200:
	ok
	I0801 17:45:22.643785   31913 api_server.go:140] control plane version: v1.24.3
	I0801 17:45:22.643800   31913 api_server.go:130] duration metric: took 4.31699607s to wait for apiserver health ...
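The healthz trace above shows the usual restart progression: 403 while anonymous access is still denied (RBAC bootstrap roles not yet installed), then 500 while individual poststarthooks (the [-] entries) are still failing, and finally 200 once every check passes. A Go sketch of the same poll (TLS verification is skipped because the apiserver presents minikube's self-signed CA; a real client would pin that CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls /healthz until it returns 200, treating 403/500 (and
    // transport errors) as "not ready yet".
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitHealthz("https://127.0.0.1:52049/healthz", time.Minute))
    }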
	I0801 17:45:22.643806   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:45:22.643812   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:45:22.643822   31913 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:45:22.651516   31913 system_pods.go:59] 8 kube-system pods found
	I0801 17:45:22.651532   31913 system_pods.go:61] "coredns-6d4b75cb6d-5s86p" [e4978024-d992-4fd7-bec6-1d4cb093c4c8] Running
	I0801 17:45:22.651536   31913 system_pods.go:61] "etcd-default-k8s-different-port-20220801174348-13911" [c440b48e-48d8-4933-870b-c73df0860f90] Running
	I0801 17:45:22.651540   31913 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220801174348-13911" [e4032a9b-61fb-4493-b20a-e5d8f00382a1] Running
	I0801 17:45:22.651544   31913 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220801174348-13911" [39dbe98f-51c3-43d0-bca0-2ca31da431b5] Running
	I0801 17:45:22.651554   31913 system_pods.go:61] "kube-proxy-f7zxq" [f0307046-df65-4bb4-8bce-ddf9847f3c8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0801 17:45:22.651561   31913 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220801174348-13911" [8d33bc48-5ef3-41d2-8a6c-3fc70a048090] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0801 17:45:22.651568   31913 system_pods.go:61] "metrics-server-5c6f97fb75-647p7" [c842a29c-ef57-4fdd-be7a-43b9aa1f5178] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:45:22.651574   31913 system_pods.go:61] "storage-provisioner" [1b0a55a5-6df4-4f1c-a915-748eedde2dcd] Running
	I0801 17:45:22.651577   31913 system_pods.go:74] duration metric: took 7.750651ms to wait for pod list to return data ...
	I0801 17:45:22.651584   31913 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:45:22.654773   31913 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:45:22.654788   31913 node_conditions.go:123] node cpu capacity is 6
	I0801 17:45:22.654797   31913 node_conditions.go:105] duration metric: took 3.209718ms to run NodePressure ...
	I0801 17:45:22.654815   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:45:22.779173   31913 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0801 17:45:22.783016   31913 kubeadm.go:777] kubelet initialised
	I0801 17:45:22.783028   31913 kubeadm.go:778] duration metric: took 3.840293ms waiting for restarted kubelet to initialise ...
	I0801 17:45:22.783039   31913 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:45:22.798314   31913 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-5s86p" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.803583   31913 pod_ready.go:92] pod "coredns-6d4b75cb6d-5s86p" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:22.803592   31913 pod_ready.go:81] duration metric: took 5.265827ms waiting for pod "coredns-6d4b75cb6d-5s86p" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.803598   31913 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.807690   31913 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:22.807699   31913 pod_ready.go:81] duration metric: took 4.096609ms waiting for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.807705   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.812128   31913 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:22.812139   31913 pod_ready.go:81] duration metric: took 4.429356ms waiting for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:22.812147   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:23.049650   31913 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:23.049663   31913 pod_ready.go:81] duration metric: took 237.506184ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:23.049674   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f7zxq" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:25.452177   31913 pod_ready.go:102] pod "kube-proxy-f7zxq" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:25.956303   31913 pod_ready.go:92] pod "kube-proxy-f7zxq" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:25.956316   31913 pod_ready.go:81] duration metric: took 2.90659156s waiting for pod "kube-proxy-f7zxq" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:25.956321   31913 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:27.967784   31913 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:29.967951   31913 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:32.469695   31913 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:34.967596   31913 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:45:34.967609   31913 pod_ready.go:81] duration metric: took 9.011143978s waiting for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
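
	The pod_ready lines above are a readiness poll: each system-critical pod's "Ready" condition is re-checked every couple of seconds until it flips to "True" or the per-pod timeout (4m0s here) elapses. A minimal sketch of that loop, assuming kubectl on PATH and a working kubeconfig (waitPodReady is a hypothetical helper, not minikube's pod_ready.go):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    	"time"
	    )

	    // waitPodReady polls the pod's Ready condition until it reads "True"
	    // or the timeout elapses.
	    func waitPodReady(ns, pod string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	    	for time.Now().Before(deadline) {
	    		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
	    			"-o", jsonpath).Output()
	    		if err == nil && strings.TrimSpace(string(out)) == "True" {
	    			return nil
	    		}
	    		time.Sleep(2 * time.Second) // the log shows ~2-2.5s between polls
	    	}
	    	return fmt.Errorf("timed out waiting %s for pod %q to be Ready", timeout, pod)
	    }

	    func main() {
	    	if err := waitPodReady("kube-system", "kube-scheduler-minikube", 4*time.Minute); err != nil {
	    		fmt.Println(err)
	    	}
	    }
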
	I0801 17:45:34.967617   31913 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace to be "Ready" ...
	I0801 17:45:36.980491   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:45:39.477994   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	[... the same pod_ready.go:102 poll for pod "metrics-server-5c6f97fb75-647p7" ("Ready":"False") repeats every ~2.5s from 17:45:41 through 17:49:31; 102 lines identical except for timestamps elided ...]
	I0801 17:49:34.484939   31913 pod_ready.go:102] pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace has status "Ready":"False"
	I0801 17:49:34.976048   31913 pod_ready.go:81] duration metric: took 4m0.004717233s waiting for pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace to be "Ready" ...
	E0801 17:49:34.976075   31913 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-647p7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0801 17:49:34.976092   31913 pod_ready.go:38] duration metric: took 4m12.189153798s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:49:34.976210   31913 kubeadm.go:630] restartCluster took 4m22.52701004s
	W0801 17:49:34.976332   31913 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0801 17:49:34.976363   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0801 17:49:37.337570   31913 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.361154161s)
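
	Once the extra wait times out, restartCluster gives up and the cluster is torn down with a forced kubeadm reset before a fresh kubeadm init, as the two Run/Completed lines above show. A minimal sketch of that fallback sequence (paths and flags copied from the log; resetAndReinit is a hypothetical helper, and the log runs these over SSH rather than locally):

	    package main

	    import (
	    	"log"
	    	"os/exec"
	    )

	    // resetAndReinit mirrors the fallback in the log: force-reset the
	    // control plane, then re-run a full kubeadm init with the same config.
	    func resetAndReinit(kubeadm, config, criSocket string) error {
	    	reset := exec.Command("sudo", kubeadm, "reset",
	    		"--cri-socket", criSocket, "--force")
	    	if out, err := reset.CombinedOutput(); err != nil {
	    		log.Printf("reset output:\n%s", out)
	    		return err
	    	}
	    	init := exec.Command("sudo", kubeadm, "init", "--config", config)
	    	out, err := init.CombinedOutput()
	    	log.Printf("init output:\n%s", out)
	    	return err
	    }

	    func main() {
	    	_ = resetAndReinit("/var/lib/minikube/binaries/v1.24.3/kubeadm",
	    		"/var/tmp/minikube/kubeadm.yaml", "/var/run/cri-dockerd.sock")
	    }
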
	I0801 17:49:37.337631   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:49:37.348151   31913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:49:37.356017   31913 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 17:49:37.356067   31913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:49:37.363491   31913 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
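
	The "config check" here simply stats the four kubeconfig files that kubeadm writes; after the reset they are all gone, so stale-config cleanup is skipped and a fresh init follows. A sketch of that existence check, assuming local file access (the log performs it via ls over SSH):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    )

	    // hasStaleKubeadmConfig reports whether any of the kubeconfigs kubeadm
	    // writes are still present - the same files 'ls' checks in the log above.
	    func hasStaleKubeadmConfig() bool {
	    	files := []string{
	    		"/etc/kubernetes/admin.conf",
	    		"/etc/kubernetes/kubelet.conf",
	    		"/etc/kubernetes/controller-manager.conf",
	    		"/etc/kubernetes/scheduler.conf",
	    	}
	    	for _, f := range files {
	    		if _, err := os.Stat(f); err == nil {
	    			return true
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	fmt.Println("stale config present:", hasStaleKubeadmConfig())
	    }
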
	I0801 17:49:37.363525   31913 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 17:49:37.647145   31913 out.go:204]   - Generating certificates and keys ...
	I0801 17:49:38.415463   31913 out.go:204]   - Booting up control plane ...
	I0801 17:49:44.964434   31913 out.go:204]   - Configuring RBAC rules ...
	I0801 17:49:45.340117   31913 cni.go:95] Creating CNI manager for ""
	I0801 17:49:45.340131   31913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:49:45.340148   31913 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:49:45.340246   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:45.340253   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93 minikube.k8s.io/name=default-k8s-different-port-20220801174348-13911 minikube.k8s.io/updated_at=2022_08_01T17_49_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:45.475719   31913 ops.go:34] apiserver oom_adj: -16
	I0801 17:49:45.475734   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:46.055191   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same "sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig" probe repeats every ~0.5s from 17:49:46 through 17:49:58; 24 lines identical except for timestamps elided ...]
	I0801 17:49:58.555182   31913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 17:49:58.617518   31913 kubeadm.go:1045] duration metric: took 13.277140503s to wait for elevateKubeSystemPrivileges.
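
	elevateKubeSystemPrivileges polls (twice a second, as above) until the "default" ServiceAccount exists, then binds cluster-admin to kube-system:default via the clusterrolebinding command at the top of this phase. A sketch of the polling half, with the kubectl path simplified and waitForDefaultSA a hypothetical name:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // waitForDefaultSA polls until the default ServiceAccount exists,
	    // mirroring the repeated "kubectl get sa default" calls in the log.
	    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		err := exec.Command("sudo", "kubectl", "get", "sa", "default",
	    			"--kubeconfig="+kubeconfig).Run()
	    		if err == nil {
	    			return nil
	    		}
	    		time.Sleep(500 * time.Millisecond) // the log polls at ~0.5s intervals
	    	}
	    	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
	    }

	    func main() {
	    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
	    		fmt.Println(err)
	    	}
	    }
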
	I0801 17:49:58.617535   31913 kubeadm.go:397] StartCluster complete in 4m46.204525782s
	I0801 17:49:58.617551   31913 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:49:58.617629   31913 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:49:58.618157   31913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
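
	The WriteFile line above shows kubeconfig updates being serialized through a named file lock. The other half of making such an update safe is writing atomically, so a concurrent reader never sees a half-written file. A minimal sketch of the atomic-write part only, under the assumption that locking is handled elsewhere (this is not minikube's lock.go):

	    package main

	    import (
	    	"os"
	    	"path/filepath"
	    )

	    // atomicWrite writes data to path via a temp file plus rename, so readers
	    // observe either the old or the new kubeconfig, never a partial one.
	    func atomicWrite(path string, data []byte) error {
	    	tmp, err := os.CreateTemp(filepath.Dir(path), ".kubeconfig-*")
	    	if err != nil {
	    		return err
	    	}
	    	defer os.Remove(tmp.Name()) // no-op after a successful rename
	    	if _, err := tmp.Write(data); err != nil {
	    		tmp.Close()
	    		return err
	    	}
	    	if err := tmp.Close(); err != nil {
	    		return err
	    	}
	    	return os.Rename(tmp.Name(), path)
	    }

	    func main() {
	    	_ = atomicWrite("/tmp/kubeconfig", []byte("apiVersion: v1\nkind: Config\n"))
	    }
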
	I0801 17:49:59.134508   31913 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220801174348-13911" rescaled to 1
	I0801 17:49:59.134544   31913 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:49:59.134572   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:49:59.134599   31913 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0801 17:49:59.134722   31913 config.go:180] Loaded profile config "default-k8s-different-port-20220801174348-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:49:59.173487   31913 out.go:177] * Verifying Kubernetes components...
	I0801 17:49:59.173610   31913 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.173622   31913 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247530   31913 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247534   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0801 17:49:59.247541   31913 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:49:59.173621   31913 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247569   31913 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247592   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.247570   31913 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220801174348-13911"
	W0801 17:49:59.247619   31913 addons.go:162] addon metrics-server should already be in state true
	I0801 17:49:59.226194   31913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0801 17:49:59.173631   31913 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220801174348-13911"
	I0801 17:49:59.247669   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.247687   31913 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220801174348-13911"
	W0801 17:49:59.247701   31913 addons.go:162] addon dashboard should already be in state true
	I0801 17:49:59.247734   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.247986   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.248049   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.248203   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.249076   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.384922   31913 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:49:59.406550   31913 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0801 17:49:59.443540   31913 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0801 17:49:59.480448   31913 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:49:59.501569   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0801 17:49:59.501592   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0801 17:49:59.501594   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 17:49:59.501769   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.539459   31913 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0801 17:49:59.501851   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.502354   31913 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220801174348-13911"
	W0801 17:49:59.576566   31913 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:49:59.576647   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0801 17:49:59.576660   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0801 17:49:59.576678   31913 host.go:66] Checking if "default-k8s-different-port-20220801174348-13911" exists ...
	I0801 17:49:59.576764   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.580094   31913 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220801174348-13911 --format={{.State.Status}}
	I0801 17:49:59.625680   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:49:59.681280   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:49:59.688282   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:49:59.691624   31913 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:49:59.691636   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:49:59.691686   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:49:59.777185   31913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52050 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/default-k8s-different-port-20220801174348-13911/id_rsa Username:docker}
	I0801 17:49:59.831897   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0801 17:49:59.831911   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0801 17:49:59.910096   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0801 17:49:59.910110   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0801 17:49:59.919841   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0801 17:49:59.919858   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0801 17:49:59.921501   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:49:59.933746   31913 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:49:59.933762   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0801 17:50:00.011707   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0801 17:50:00.011724   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0801 17:50:00.031335   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:50:00.033068   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:50:00.036467   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0801 17:50:00.036480   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0801 17:50:00.116457   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0801 17:50:00.116470   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0801 17:50:00.214471   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0801 17:50:00.214492   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0801 17:50:00.326401   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0801 17:50:00.326442   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0801 17:50:00.401494   31913 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.153808764s)
	I0801 17:50:00.401493   31913 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.15390213s)
	I0801 17:50:00.401524   31913 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
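
	The sed pipeline that just completed injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.65.2 on Docker for Mac). The stanza it inserts, taken verbatim from the sed expression in the command above, is:

	        hosts {
	           192.168.65.2 host.minikube.internal
	           fallthrough
	        }

	The fallthrough directive lets queries that don't match the injected host continue on to CoreDNS's regular forward plugin.
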
	I0801 17:50:00.401623   31913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220801174348-13911
	I0801 17:50:00.418882   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0801 17:50:00.418903   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0801 17:50:00.481810   31913 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220801174348-13911" to be "Ready" ...
	I0801 17:50:00.505662   31913 node_ready.go:49] node "default-k8s-different-port-20220801174348-13911" has status "Ready":"True"
	I0801 17:50:00.505675   31913 node_ready.go:38] duration metric: took 23.848502ms waiting for node "default-k8s-different-port-20220801174348-13911" to be "Ready" ...
	I0801 17:50:00.505683   31913 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:50:00.512490   31913 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-cvnql" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:00.540422   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0801 17:50:00.540439   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0801 17:50:00.612866   31913 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:50:00.612881   31913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0801 17:50:00.637798   31913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:50:00.747371   31913 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220801174348-13911"
	I0801 17:50:01.390479   31913 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0801 17:50:01.432449   31913 addons.go:414] enableAddons completed in 2.297828518s
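
	Each addon above is staged by scp-ing its YAML into /etc/kubernetes/addons and then applied in one shot with the cluster's own kubectl binary under KUBECONFIG. A hedged sketch of the apply step (applyAddonManifests is an illustrative name; the sudo-with-env form matches the commands in the log):

	    package main

	    import (
	    	"log"
	    	"os/exec"
	    )

	    // applyAddonManifests mirrors the apply step in the log: a single
	    // kubectl apply over all manifest files for an addon, using the
	    // in-VM kubeconfig.
	    func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
	    	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	    	for _, m := range manifests {
	    		args = append(args, "-f", m)
	    	}
	    	out, err := exec.Command("sudo", args...).CombinedOutput()
	    	if err != nil {
	    		log.Printf("apply failed:\n%s", out)
	    	}
	    	return err
	    }

	    func main() {
	    	_ = applyAddonManifests("/var/lib/minikube/binaries/v1.24.3/kubectl",
	    		"/var/lib/minikube/kubeconfig",
	    		[]string{"/etc/kubernetes/addons/storage-provisioner.yaml"})
	    }
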
	I0801 17:50:02.527846   31913 pod_ready.go:92] pod "coredns-6d4b75cb6d-cvnql" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:02.527860   31913 pod_ready.go:81] duration metric: took 2.01531768s waiting for pod "coredns-6d4b75cb6d-cvnql" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:02.527869   31913 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-z8jfq" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.540729   31913 pod_ready.go:92] pod "coredns-6d4b75cb6d-z8jfq" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.540741   31913 pod_ready.go:81] duration metric: took 2.012836849s waiting for pod "coredns-6d4b75cb6d-z8jfq" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.540747   31913 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.545243   31913 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.545251   31913 pod_ready.go:81] duration metric: took 4.4993ms waiting for pod "etcd-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.545258   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.548996   31913 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.549004   31913 pod_ready.go:81] duration metric: took 3.736506ms waiting for pod "kube-apiserver-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.549010   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.552768   31913 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.552776   31913 pod_ready.go:81] duration metric: took 3.76149ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.552782   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dvn56" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.556657   31913 pod_ready.go:92] pod "kube-proxy-dvn56" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.556665   31913 pod_ready.go:81] duration metric: took 3.869516ms waiting for pod "kube-proxy-dvn56" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.556670   31913 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.940897   31913 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace has status "Ready":"True"
	I0801 17:50:04.940907   31913 pod_ready.go:81] duration metric: took 384.226091ms waiting for pod "kube-scheduler-default-k8s-different-port-20220801174348-13911" in "kube-system" namespace to be "Ready" ...
	I0801 17:50:04.940914   31913 pod_ready.go:38] duration metric: took 4.435152434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 17:50:04.940932   31913 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:50:04.940979   31913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:50:04.951301   31913 api_server.go:71] duration metric: took 5.816647694s to wait for apiserver process to appear ...
	I0801 17:50:04.951313   31913 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:50:04.951319   31913 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52049/healthz ...
	I0801 17:50:04.956817   31913 api_server.go:266] https://127.0.0.1:52049/healthz returned 200:
	ok
	I0801 17:50:04.958134   31913 api_server.go:140] control plane version: v1.24.3
	I0801 17:50:04.958144   31913 api_server.go:130] duration metric: took 6.826071ms to wait for apiserver health ...
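
	The healthz check above is a plain HTTPS GET against the forwarded apiserver port, expecting a 200 with body "ok". Because the test talks to 127.0.0.1 through a port-forward, the apiserver's cert doesn't match and TLS verification has to be skipped. A self-contained sketch of the same check (checkHealthz is a hypothetical helper):

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    // checkHealthz GETs the apiserver's /healthz endpoint and requires a
	    // 200 response with body "ok". TLS verification is skipped because the
	    // test hits a local, port-forwarded endpoint - don't do this against
	    // real clusters.
	    func checkHealthz(url string) error {
	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		Transport: &http.Transport{
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	resp, err := client.Get(url)
	    	if err != nil {
	    		return err
	    	}
	    	defer resp.Body.Close()
	    	body, _ := io.ReadAll(resp.Body)
	    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
	    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	    	}
	    	return nil
	    }

	    func main() {
	    	fmt.Println(checkHealthz("https://127.0.0.1:52049/healthz"))
	    }
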
	I0801 17:50:04.958149   31913 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:50:05.140334   31913 system_pods.go:59] 9 kube-system pods found
	I0801 17:50:05.140349   31913 system_pods.go:61] "coredns-6d4b75cb6d-cvnql" [9614734b-2bd7-4bbf-97b5-634cb4468393] Running
	I0801 17:50:05.140353   31913 system_pods.go:61] "coredns-6d4b75cb6d-z8jfq" [860c344e-4653-4582-ab6e-19ef7308526f] Running
	I0801 17:50:05.140357   31913 system_pods.go:61] "etcd-default-k8s-different-port-20220801174348-13911" [441c7722-6d7f-4385-b0b8-649b3f4ce6f2] Running
	I0801 17:50:05.140360   31913 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220801174348-13911" [97cf9337-b5ff-477d-b398-366aee9386c6] Running
	I0801 17:50:05.140364   31913 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220801174348-13911" [a457c03f-b47f-41b4-98f9-c117f334574f] Running
	I0801 17:50:05.140368   31913 system_pods.go:61] "kube-proxy-dvn56" [c67e035f-7889-4442-a7af-6972b0937045] Running
	I0801 17:50:05.140373   31913 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220801174348-13911" [c3505894-023d-4f91-baaa-6328dac164b8] Running
	I0801 17:50:05.140378   31913 system_pods.go:61] "metrics-server-5c6f97fb75-wzfjd" [43803567-1715-4fb4-9020-c9ac939c5e55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:50:05.140383   31913 system_pods.go:61] "storage-provisioner" [1e484f79-248b-4da1-a6d5-eef631825f86] Running
	I0801 17:50:05.140387   31913 system_pods.go:74] duration metric: took 182.231588ms to wait for pod list to return data ...
	I0801 17:50:05.140392   31913 default_sa.go:34] waiting for default service account to be created ...
	I0801 17:50:05.338528   31913 default_sa.go:45] found service account: "default"
	I0801 17:50:05.338539   31913 default_sa.go:55] duration metric: took 198.14019ms for default service account to be created ...
	I0801 17:50:05.338544   31913 system_pods.go:116] waiting for k8s-apps to be running ...
	I0801 17:50:05.542082   31913 system_pods.go:86] 9 kube-system pods found
	I0801 17:50:05.542095   31913 system_pods.go:89] "coredns-6d4b75cb6d-cvnql" [9614734b-2bd7-4bbf-97b5-634cb4468393] Running
	I0801 17:50:05.542100   31913 system_pods.go:89] "coredns-6d4b75cb6d-z8jfq" [860c344e-4653-4582-ab6e-19ef7308526f] Running
	I0801 17:50:05.542103   31913 system_pods.go:89] "etcd-default-k8s-different-port-20220801174348-13911" [441c7722-6d7f-4385-b0b8-649b3f4ce6f2] Running
	I0801 17:50:05.542107   31913 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220801174348-13911" [97cf9337-b5ff-477d-b398-366aee9386c6] Running
	I0801 17:50:05.542111   31913 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220801174348-13911" [a457c03f-b47f-41b4-98f9-c117f334574f] Running
	I0801 17:50:05.542115   31913 system_pods.go:89] "kube-proxy-dvn56" [c67e035f-7889-4442-a7af-6972b0937045] Running
	I0801 17:50:05.542131   31913 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220801174348-13911" [c3505894-023d-4f91-baaa-6328dac164b8] Running
	I0801 17:50:05.542140   31913 system_pods.go:89] "metrics-server-5c6f97fb75-wzfjd" [43803567-1715-4fb4-9020-c9ac939c5e55] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:50:05.542145   31913 system_pods.go:89] "storage-provisioner" [1e484f79-248b-4da1-a6d5-eef631825f86] Running
	I0801 17:50:05.542149   31913 system_pods.go:126] duration metric: took 203.598883ms to wait for k8s-apps to be running ...
	I0801 17:50:05.542158   31913 system_svc.go:44] waiting for kubelet service to be running ....
	I0801 17:50:05.542206   31913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 17:50:05.551638   31913 system_svc.go:56] duration metric: took 9.480244ms WaitForService to wait for kubelet.
	I0801 17:50:05.551649   31913 kubeadm.go:572] duration metric: took 6.41698891s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0801 17:50:05.551663   31913 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:50:05.736899   31913 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:50:05.736912   31913 node_conditions.go:123] node cpu capacity is 6
	I0801 17:50:05.736919   31913 node_conditions.go:105] duration metric: took 185.250207ms to run NodePressure ...
	I0801 17:50:05.736928   31913 start.go:216] waiting for startup goroutines ...
	I0801 17:50:05.767446   31913 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:50:05.791650   31913 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220801174348-13911" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:45:08 UTC, end at Tue 2022-08-02 00:51:23 UTC. --
	Aug 02 00:49:36 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:49:36.986723879Z" level=info msg="ignoring event" container=51dcc87bf156069ed5b022267ec851df4cf21ffce110252251530c614fca5211 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:01 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:01.815489273Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:01 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:01.815564570Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:01 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:01.816593777Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:02 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:02.775203273Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Aug 02 00:50:05 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:05.758687115Z" level=info msg="ignoring event" container=2e61157a30011a3009a6eef9923faeb2a202b4d3e06188a155030ed990235169 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:05 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:05.807266611Z" level=info msg="ignoring event" container=49a0d13a8074ab0ce6f0943a086fd3ad302f60f82a71c14523b86d3bd7ea0dee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:07 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:07.971499991Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3\": Get \"https://auth.docker.io/token?scope=repository%3Akubernetesui%2Fdashboard%3Apull&service=registry.docker.io\": EOF"
	Aug 02 00:50:07 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:07.972836759Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3\": Get \"https://auth.docker.io/token?scope=repository%3Akubernetesui%2Fdashboard%3Apull&service=registry.docker.io\": EOF"
	Aug 02 00:50:08 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:08.618261274Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:50:08 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:08.914613846Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Aug 02 00:50:12 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:12.129967689Z" level=info msg="ignoring event" container=d60f488f17fe1065b06858a1c6016e8439232b28c5fb8330de3430cf9c0816e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:13 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:13.142482770Z" level=info msg="ignoring event" container=2238d514a7cff6225d6963caefab0e8d5062112646258c5d0f4f8339cc02108c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:14 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:14.329383120Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:14 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:14.329768932Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:14 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:14.330982199Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:24 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:24.586937218Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Aug 02 00:50:36 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:36.423189840Z" level=info msg="ignoring event" container=aab57476fd3b066d1b9d15fa2972041db92bab6024b1563609cebb49ac733d07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:50:42 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:42.305008080Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:42 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:42.305066077Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:50:42 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:50:42.378976092Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:51:20.136257886Z" level=info msg="ignoring event" container=e409413b95ee5c06865c14a5f35513c6804a5fa22f45164ab3f2c0918cfbc7e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:51:20.137645812Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:51:20.137686097Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 dockerd[515]: time="2022-08-02T00:51:20.138861057Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	e409413b95ee5       a90209bb39e3d                                                                                    4 seconds ago        Exited              dashboard-metrics-scraper   3                   eab0368196dfb
	9f56fd824bc5f       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   54 seconds ago       Running             kubernetes-dashboard        0                   fb87cf0fcc1f7
	b279da282f80f       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   ed69aa184eb16
	027d6efe72a6a       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   209dc38a5e767
	e8bc638faf651       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   06a4f994ca171
	ebef0cd649b37       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   ea4e7e98257fb
	e40d954d1f100       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   1a8045f98b3b1
	1daa6699c3713       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   3c47cda00da87
	37b405b118d92       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   dfa1fa748d1c3
	
	* 
	* ==> coredns [027d6efe72a6] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220801174348-13911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220801174348-13911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93
	                    minikube.k8s.io/name=default-k8s-different-port-20220801174348-13911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_08_01T17_49_45_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Aug 2022 00:49:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220801174348-13911
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Aug 2022 00:51:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Aug 2022 00:51:16 +0000   Tue, 02 Aug 2022 00:49:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Aug 2022 00:51:16 +0000   Tue, 02 Aug 2022 00:49:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Aug 2022 00:51:16 +0000   Tue, 02 Aug 2022 00:49:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Aug 2022 00:51:16 +0000   Tue, 02 Aug 2022 00:49:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20220801174348-13911
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c192b04687c403f8fbb9bc7975b21b3
	  System UUID:                1bcc3bc9-f0da-4ff3-aea8-f9de709d8302
	  Boot ID:                    71cf565c-fd32-45eb-95e1-c87a7a5ba5a0
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-z8jfq                                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     85s
	  kube-system                 etcd-default-k8s-different-port-20220801174348-13911                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         98s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220801174348-13911              250m (4%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220801174348-13911    200m (3%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-dvn56                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220801174348-13911              100m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 metrics-server-5c6f97fb75-wzfjd                                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         83s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-8jj4s                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-49lj4                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 84s   kube-proxy       
	  Normal  Starting                 98s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  98s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s   kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s   kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s   kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasSufficientPID
	  Normal  NodeReady                98s   kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeReady
	  Normal  RegisteredNode           86s   node-controller  Node default-k8s-different-port-20220801174348-13911 event: Registered Node default-k8s-different-port-20220801174348-13911 in Controller
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node default-k8s-different-port-20220801174348-13911 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s    kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [ebef0cd649b3] <==
	* {"level":"info","ts":"2022-08-02T00:49:39.827Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:49:39.827Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:49:40.266Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20220801174348-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:49:40.267Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:49:40.268Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:49:40.268Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-08-02T00:49:40.268Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:49:40.269Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-08-02T00:50:07.436Z","caller":"traceutil/trace.go:171","msg":"trace[1333753144] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"101.966213ms","start":"2022-08-02T00:50:07.334Z","end":"2022-08-02T00:50:07.435Z","steps":["trace[1333753144] 'process raft request'  (duration: 26.42704ms)","trace[1333753144] 'compare'  (duration: 75.069696ms)"],"step_count":2}
	{"level":"warn","ts":"2022-08-02T00:51:20.114Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"145.016443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-49lj4\" ","response":"range_response_count:1 size:3933"}
	{"level":"info","ts":"2022-08-02T00:51:20.114Z","caller":"traceutil/trace.go:171","msg":"trace[1310099285] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-49lj4; range_end:; response_count:1; response_revision:596; }","duration":"145.571737ms","start":"2022-08-02T00:51:19.969Z","end":"2022-08-02T00:51:20.114Z","steps":["trace[1310099285] 'range keys from in-memory index tree'  (duration: 144.977521ms)"],"step_count":1}
	{"level":"warn","ts":"2022-08-02T00:51:20.115Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"140.889764ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289942267317697285 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-5c6f97fb75-wzfjd.170760daa9779d32\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-5c6f97fb75-wzfjd.170760daa9779d32\" value_size:659 lease:2289942267317697224 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-08-02T00:51:20.116Z","caller":"traceutil/trace.go:171","msg":"trace[518772032] transaction","detail":"{read_only:false; response_revision:597; number_of_response:1; }","duration":"142.914961ms","start":"2022-08-02T00:51:19.973Z","end":"2022-08-02T00:51:20.116Z","steps":["trace[518772032] 'compare'  (duration: 140.812135ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  00:51:24 up  1:16,  0 users,  load average: 0.82, 0.70, 0.81
	Linux default-k8s-different-port-20220801174348-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [37b405b118d9] <==
	* I0802 00:49:44.636191       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 00:49:45.164715       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0802 00:49:45.170551       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0802 00:49:45.178286       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0802 00:49:45.266978       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 00:49:58.141389       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0802 00:49:58.190667       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0802 00:49:58.734169       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0802 00:50:00.752564       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.101.55.90]
	I0802 00:50:01.365674       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.106.198.174]
	I0802 00:50:01.374405       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.104.203.13]
	W0802 00:50:01.651309       1 handler_proxy.go:102] no RequestInfo found in the context
	W0802 00:50:01.651334       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:50:01.651350       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:50:01.651355       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0802 00:50:01.651357       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:50:01.652615       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:51:15.969039       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:51:15.969055       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:51:15.969060       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:51:15.969709       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:51:15.969730       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:51:15.970064       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [1daa6699c371] <==
	* I0802 00:49:58.649022       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0802 00:49:58.655289       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-cvnql"
	I0802 00:50:00.635755       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0802 00:50:00.639551       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0802 00:50:00.643873       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0802 00:50:00.650832       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-wzfjd"
	I0802 00:50:01.277624       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0802 00:50:01.282749       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:50:01.287293       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0802 00:50:01.287662       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0802 00:50:01.292720       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:50:01.293052       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:50:01.293095       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:50:01.294970       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:50:01.295061       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0802 00:50:01.297074       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0802 00:50:01.299645       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 00:50:01.299719       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0802 00:50:01.319643       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-49lj4"
	I0802 00:50:01.320138       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-8jj4s"
	W0802 00:50:06.995085       1 endpointslice_controller.go:302] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0802 00:50:27.606462       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0802 00:50:28.104257       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0802 00:51:16.086708       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0802 00:51:16.156895       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [e8bc638faf65] <==
	* I0802 00:49:58.704174       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:49:58.704231       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:49:58.704269       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:49:58.730963       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:49:58.731000       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:49:58.731008       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:49:58.731017       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:49:58.731035       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:49:58.731181       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:49:58.731417       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:49:58.731445       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:49:58.731854       1 config.go:317] "Starting service config controller"
	I0802 00:49:58.731899       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:49:58.731915       1 config.go:444] "Starting node config controller"
	I0802 00:49:58.731918       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:49:58.732287       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:49:58.732316       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:49:58.832334       1 shared_informer.go:262] Caches are synced for node config
	I0802 00:49:58.832399       1 shared_informer.go:262] Caches are synced for service config
	I0802 00:49:58.832414       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [e40d954d1f10] <==
	* E0802 00:49:42.547353       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 00:49:42.545791       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 00:49:42.547360       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 00:49:42.547236       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 00:49:42.547586       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 00:49:42.547616       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 00:49:43.369159       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 00:49:43.369195       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 00:49:43.369233       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0802 00:49:43.369240       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0802 00:49:43.395309       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 00:49:43.395344       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 00:49:43.473103       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 00:49:43.473157       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 00:49:43.555960       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 00:49:43.555978       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 00:49:43.622150       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 00:49:43.622239       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 00:49:43.669316       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 00:49:43.669334       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 00:49:43.673244       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 00:49:43.673352       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0802 00:49:43.698477       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 00:49:43.698632       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0802 00:49:46.940826       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:45:08 UTC, end at Tue 2022-08-02 00:51:24 UTC. --
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590761    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzmd\" (UniqueName: \"kubernetes.io/projected/1e484f79-248b-4da1-a6d5-eef631825f86-kube-api-access-dbzmd\") pod \"storage-provisioner\" (UID: \"1e484f79-248b-4da1-a6d5-eef631825f86\") " pod="kube-system/storage-provisioner"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590779    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e200b550-e12b-448d-a50e-7c3e4b390f31-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-8jj4s\" (UID: \"e200b550-e12b-448d-a50e-7c3e4b390f31\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-8jj4s"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590795    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8w4\" (UniqueName: \"kubernetes.io/projected/860c344e-4653-4582-ab6e-19ef7308526f-kube-api-access-pr8w4\") pod \"coredns-6d4b75cb6d-z8jfq\" (UID: \"860c344e-4653-4582-ab6e-19ef7308526f\") " pod="kube-system/coredns-6d4b75cb6d-z8jfq"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590811    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c67e035f-7889-4442-a7af-6972b0937045-lib-modules\") pod \"kube-proxy-dvn56\" (UID: \"c67e035f-7889-4442-a7af-6972b0937045\") " pod="kube-system/kube-proxy-dvn56"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590826    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/180a2c1b-6569-45b1-8704-8dd02927b1bd-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-49lj4\" (UID: \"180a2c1b-6569-45b1-8704-8dd02927b1bd\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-49lj4"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590840    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/43803567-1715-4fb4-9020-c9ac939c5e55-tmp-dir\") pod \"metrics-server-5c6f97fb75-wzfjd\" (UID: \"43803567-1715-4fb4-9020-c9ac939c5e55\") " pod="kube-system/metrics-server-5c6f97fb75-wzfjd"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590892    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp297\" (UniqueName: \"kubernetes.io/projected/c67e035f-7889-4442-a7af-6972b0937045-kube-api-access-wp297\") pod \"kube-proxy-dvn56\" (UID: \"c67e035f-7889-4442-a7af-6972b0937045\") " pod="kube-system/kube-proxy-dvn56"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590922    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sckf\" (UniqueName: \"kubernetes.io/projected/180a2c1b-6569-45b1-8704-8dd02927b1bd-kube-api-access-6sckf\") pod \"kubernetes-dashboard-5fd5574d9f-49lj4\" (UID: \"180a2c1b-6569-45b1-8704-8dd02927b1bd\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-49lj4"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590953    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e484f79-248b-4da1-a6d5-eef631825f86-tmp\") pod \"storage-provisioner\" (UID: \"1e484f79-248b-4da1-a6d5-eef631825f86\") " pod="kube-system/storage-provisioner"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.590972    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c67e035f-7889-4442-a7af-6972b0937045-kube-proxy\") pod \"kube-proxy-dvn56\" (UID: \"c67e035f-7889-4442-a7af-6972b0937045\") " pod="kube-system/kube-proxy-dvn56"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.591033    9979 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x44h\" (UniqueName: \"kubernetes.io/projected/e200b550-e12b-448d-a50e-7c3e4b390f31-kube-api-access-7x44h\") pod \"dashboard-metrics-scraper-dffd48c4c-8jj4s\" (UID: \"e200b550-e12b-448d-a50e-7c3e4b390f31\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-8jj4s"
	Aug 02 00:51:17 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:17.591135    9979 reconciler.go:157] "Reconciler: start to sync state"
	Aug 02 00:51:18 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:18.766490    9979 request.go:601] Waited for 1.166528284s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Aug 02 00:51:18 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:18.824633    9979 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220801174348-13911\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220801174348-13911"
	Aug 02 00:51:18 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:18.973870    9979 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220801174348-13911\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220801174348-13911"
	Aug 02 00:51:19 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:19.170869    9979 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220801174348-13911\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220801174348-13911"
	Aug 02 00:51:19 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:19.440950    9979 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220801174348-13911\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220801174348-13911"
	Aug 02 00:51:19 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:19.670511    9979 scope.go:110] "RemoveContainer" containerID="aab57476fd3b066d1b9d15fa2972041db92bab6024b1563609cebb49ac733d07"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:20.139345    9979 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:20.139413    9979 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:20.139583    9979 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d2g7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-wzfjd_kube-system(43803567-1715-4fb4-9020-c9ac939c5e55): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:20.139613    9979 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-wzfjd" podUID=43803567-1715-4fb4-9020-c9ac939c5e55
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:20.623183    9979 scope.go:110] "RemoveContainer" containerID="aab57476fd3b066d1b9d15fa2972041db92bab6024b1563609cebb49ac733d07"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 kubelet[9979]: I0802 00:51:20.623365    9979 scope.go:110] "RemoveContainer" containerID="e409413b95ee5c06865c14a5f35513c6804a5fa22f45164ab3f2c0918cfbc7e2"
	Aug 02 00:51:20 default-k8s-different-port-20220801174348-13911 kubelet[9979]: E0802 00:51:20.623509    9979 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-8jj4s_kubernetes-dashboard(e200b550-e12b-448d-a50e-7c3e4b390f31)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-8jj4s" podUID=e200b550-e12b-448d-a50e-7c3e4b390f31
	
	* 
	* ==> kubernetes-dashboard [9f56fd824bc5] <==
	* 2022/08/02 00:50:29 Starting overwatch
	2022/08/02 00:50:29 Using namespace: kubernetes-dashboard
	2022/08/02 00:50:29 Using in-cluster config to connect to apiserver
	2022/08/02 00:50:29 Using secret token for csrf signing
	2022/08/02 00:50:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/08/02 00:50:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/08/02 00:50:29 Successful initial request to the apiserver, version: v1.24.3
	2022/08/02 00:50:29 Generating JWE encryption key
	2022/08/02 00:50:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/08/02 00:50:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/08/02 00:50:29 Initializing JWE encryption key from synchronized object
	2022/08/02 00:50:29 Creating in-cluster Sidecar client
	2022/08/02 00:50:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/08/02 00:50:29 Serving insecurely on HTTP port: 9090
	2022/08/02 00:51:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [b279da282f80] <==
	* I0802 00:50:01.655623       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 00:50:01.664007       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 00:50:01.664055       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 00:50:01.669321       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 00:50:01.669442       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220801174348-13911_ae20c346-a628-4ee6-869e-4b781f24b010!
	I0802 00:50:01.669749       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4378a963-0a3f-49bb-9d97-2fb63b088c26", APIVersion:"v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220801174348-13911_ae20c346-a628-4ee6-869e-4b781f24b010 became leader
	I0802 00:50:01.769623       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220801174348-13911_ae20c346-a628-4ee6-869e-4b781f24b010!
	

-- /stdout --
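Note on the repeated fake.domain failures in the dockerd and kubelet logs above: the kubelet dump shows the metrics-server container pinned to fake.domain/k8s.gcr.io/echoserver:1.4, a registry host with no DNS record, so every pull dies at Docker Desktop's embedded resolver (192.168.65.2:53) with "no such host" and the container never starts (the ErrImagePull at 00:51:20). A minimal Go sketch reproducing that failure class (a hypothetical standalone file, not minikube code):

package main

// fakelookup.go: reproduce the DNS failure behind the ErrImagePull lines
// above. fake.domain has no record, so the resolver returns the same
// "no such host" error class that dockerd logs before giving up the pull.

import (
	"errors"
	"fmt"
	"net"
)

func main() {
	_, err := net.LookupHost("fake.domain")
	var dnsErr *net.DNSError
	if errors.As(err, &dnsErr) {
		fmt.Printf("lookup failed: %v (IsNotFound=%v)\n", dnsErr, dnsErr.IsNotFound)
	}
}

This is the lookup dockerd retried at 00:50:14, 00:50:42 and 00:51:20 before reporting the pull error back to the kubelet.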
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220801174348-13911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-wzfjd
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220801174348-13911 describe pod metrics-server-5c6f97fb75-wzfjd
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220801174348-13911 describe pod metrics-server-5c6f97fb75-wzfjd: exit status 1 (288.932705ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-wzfjd" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220801174348-13911 describe pod metrics-server-5c6f97fb75-wzfjd: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (43.58s)
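The post-mortem above also shows why the final describe exits 1: the list at helpers_test.go:261 still saw metrics-server-5c6f97fb75-wzfjd, but the describe at helpers_test.go:275 runs without -n, so the name resolves against the default namespace while the pod (per the node dump) lives in kube-system; a deletion racing between the two calls would produce the same NotFound. A namespace-aware sketch of that post-mortem step (a hypothetical helper, not the actual helpers_test.go code):

package main

// postmortem.go: list non-running pods as namespace/name pairs, then
// describe each with its namespace, tolerating pods that disappear
// between the two kubectl calls.

import (
	"fmt"
	"os/exec"
	"strings"
)

func describeNonRunning(kubectlContext string) {
	// Same field selector the harness uses to find non-running pods.
	out, err := exec.Command("kubectl", "--context", kubectlContext,
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=jsonpath={range .items[*]}{.metadata.namespace}/{.metadata.name}{\"\\n\"}{end}").Output()
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == "" {
			continue
		}
		parts := strings.SplitN(line, "/", 2) // namespace/name
		desc, err := exec.Command("kubectl", "--context", kubectlContext,
			"-n", parts[0], "describe", "pod", parts[1]).CombinedOutput()
		if err != nil {
			// Tolerate pods deleted between the list and the describe.
			fmt.Printf("describe %s: %v\n", line, err)
			continue
		}
		fmt.Printf("%s\n", desc)
	}
}

func main() {
	describeNonRunning("default-k8s-different-port-20220801174348-13911")
}

Carrying namespace/name pairs through the loop removes both failure modes: the describe can no longer miss the namespace, and a pod deleted mid-loop is reported and skipped instead of failing the post-mortem.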

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
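Each WARNING line that follows is one failed iteration of that 9m0s poll; here every list dies with EOF because nothing is answering at 127.0.0.1:50783. A rough Go sketch of the polling pattern behind those lines (a hypothetical helper, not minikube's actual wait code):

package main

// waitpods.go: poll for pods matching a label selector until a deadline,
// logging each failed attempt much like the harness WARNING lines below.

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPods(kubectlContext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"-n", namespace, "get", "po", "-l", selector,
			"-o=jsonpath={.items[*].metadata.name}").Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return nil // at least one matching pod exists
		}
		// One WARNING per failed poll, e.g. EOF when the apiserver is gone.
		fmt.Printf("WARNING: pod list for %q %q failed: %v\n", namespace, selector, err)
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out after %v waiting for %s in %q", timeout, selector, namespace)
}

func main() {
	// Placeholder profile name; the real context comes from the test profile.
	if err := waitForPods("old-k8s-version-example", "kubernetes-dashboard",
		"k8s-app=kubernetes-dashboard", 9*time.Minute); err != nil {
		fmt.Println(err)
	}
}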
E0801 17:50:54.826905   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:51:40.584050   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50783/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0801 17:52:01.351681   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:52:02.210119   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
E0801 17:52:23.218775   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:52:37.582315   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 17:52:39.220508   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 17:52:50.404300   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:52:50.912189   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:53:04.541182   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:54:02.290099   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 17:54:43.414786   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:54:43.418547   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:54:43.420486   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:54:43.430909   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:54:43.451248   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:54:43.492211   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:54:43.573441   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:54:43.733759   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:54:44.054440   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:54:44.696795   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:54:45.977215   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:54:48.539585   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
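The burst of cert_rotation.go messages above, with gaps that roughly double from milliseconds to seconds, presumably comes from client-go's client-certificate reload logic retrying with backoff after the profile's client.crt was deleted along with its cluster. Below is a tiny sketch of the same failure mode; the certPath and the retry loop are hypothetical, not minikube's code.

// cert_retry.go -- illustrative only; the path and loop are hypothetical.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const certPath = "/tmp/deleted-profile/client.crt" // hypothetical path
	delay := 10 * time.Millisecond
	for i := 0; i < 5; i++ {
		if _, err := os.ReadFile(certPath); err != nil {
			// Prints the same class of error as the log:
			// key failed with : open ...: no such file or directory
			fmt.Printf("E cert_rotation: key failed with : %v\n", err)
			time.Sleep(delay)
			delay *= 2 // backoff, matching the doubling timestamp gaps above
			continue
		}
		break
	}
}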
E0801 17:54:53.660949   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:55:03.901180   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:55:12.037370   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:55:17.070763   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:55:24.383058   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:55:54.842100   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:56:05.345190   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:56:40.601283   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:57:01.361625   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:57:02.219695   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
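From here the warning changes: the 9m wait context has expired, so client-go's client-side rate limiter refuses to start new requests at all, and every List fails before reaching the network. The snippet below is a minimal reproduction using golang.org/x/time/rate, which client-go's default token-bucket limiter is built on (client-go defaults to QPS 5, burst 10); it is illustrative, not minikube code.

// ratelimit_deadline.go -- minimal reproduction of the message above.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // client-go's default QPS/burst
	ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
	defer cancel()
	time.Sleep(time.Millisecond) // the deadline has passed, as after the 9m budget
	if err := limiter.Wait(ctx); err != nil {
		// Prints: client rate limiter Wait returned an error: context deadline exceeded
		fmt.Println("client rate limiter Wait returned an error:", err)
	}
}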
E0801 17:57:23.223988   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:57:27.266208   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:57:37.588153   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 17:57:39.224798   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 17:57:50.407765   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:58:04.546148   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:59:43.417742   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/default-k8s-different-port-20220801174348-13911/client.crt: no such file or directory
E0801 17:59:43.421939   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:59:43.652141   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
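The flood of identical warnings above is characteristic of a poll loop whose shared context deadline has expired: once the 9m0s budget is gone, every subsequent pod-list attempt fails inside the client-side rate limiter before any request reaches the API server. A minimal sketch of that interaction, using golang.org/x/time/rate as a stand-in for client-go's limiter (the intervals and messages are illustrative, not the test's actual code):

// ratelimit_deadline.go - why "client rate limiter Wait returned an error:
// context deadline exceeded" repeats once a poll loop's deadline has passed.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// One request per second, standing in for the client-side limiter.
	limiter := rate.NewLimiter(rate.Every(time.Second), 1)

	// The whole poll loop shares a single deadline, like the test's 9m0s
	// budget (shortened here to 3s for the demo).
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	for i := 0; ; i++ {
		// Wait blocks for a token, but errors immediately whenever the
		// context's deadline cannot accommodate the wait -- so after the
		// deadline passes, every iteration fails at once.
		if err := limiter.Wait(ctx); err != nil {
			fmt.Printf("attempt %d: client rate limiter Wait returned an error: %v\n", i, err)
			if ctx.Err() != nil {
				return // deadline exhausted; a real test would log and fail
			}
			time.Sleep(100 * time.Millisecond)
			continue
		}
		fmt.Printf("attempt %d: pod list would run here\n", i)
	}
}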
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 2 (429.347928ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-20220801172716-13911" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220801172716-13911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220801172716-13911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.078µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220801172716-13911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
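The failed assertion above boils down to fetching the deployment and checking its pod template images. A hedged sketch of such a check with client-go (the namespace and deployment name follow the log above; the kubeconfig handling and the helper itself are an illustration, not minikube's actual test code):

// imagecheck.go - sketch of verifying that a deployment's containers use an
// expected image, in the spirit of the failed echoserver assertion above.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch the deployment the test inspects with `kubectl describe`.
	deploy, err := clientset.AppsV1().Deployments("kubernetes-dashboard").
		Get(context.TODO(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The assertion: some container image must contain the expected ref.
	const want = "k8s.gcr.io/echoserver:1.4"
	for _, c := range deploy.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, want) {
			fmt.Println("found expected image:", c.Image)
			return
		}
	}
	fmt.Println("addon did not load correct image; expected to contain", want)
}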
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220801172716-13911
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220801172716-13911:

-- stdout --
	[
	    {
	        "Id": "dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6",
	        "Created": "2022-08-02T00:27:24.523444703Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246645,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:33:03.548358911Z",
	            "FinishedAt": "2022-08-02T00:33:00.53307201Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hostname",
	        "HostsPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/hosts",
	        "LogPath": "/var/lib/docker/containers/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6/dfb69a53356501367b36570fa5295eb3938cba435699a600a5f053a9987c8da6-json.log",
	        "Name": "/old-k8s-version-20220801172716-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220801172716-13911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220801172716-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16e7475a8db87a85da2e4c2cdbd8c60ad2dc372d0612a8d1bdd09d7a15b771ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220801172716-13911",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220801172716-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220801172716-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220801172716-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7033b72c7cb5dd94daf6f66da715470e46ad00b0bd6f037aa3061302fc290971",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50784"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50785"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50786"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50787"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50783"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7033b72c7cb5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220801172716-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dfb69a533565",
	                        "old-k8s-version-20220801172716-13911"
	                    ],
	                    "NetworkID": "947fc21b2e0fc27b09dd4dd43b477927d08a61d441a541fee2a6fa712bca71b9",
	                    "EndpointID": "a3b831dd7b0090943b49fd33eab9fa69501e40c1e99428d6b52499a1a33c63e3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
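The same post-mortem data is available programmatically; a minimal sketch using the Docker Engine Go SDK to read the two fields the harness keys on, container state and mapped ports (the container name comes from the inspect output above; panicking on errors is for brevity only):

// inspect.go - reading a container's state via the Docker Engine Go SDK
// instead of shelling out to `docker inspect`.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "old-k8s-version-20220801172716-13911")
	if err != nil {
		panic(err)
	}

	// The two fields the post-mortem keys on: overall state and port map.
	fmt.Println("status:", info.State.Status) // e.g. "running"
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}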
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 2 (427.852308ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
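minikube's --format flag renders a Go text/template against its status struct, which is how the two probes above can disagree: {{.Host}} reports Running (the container is up) while {{.APIServer}} reports Stopped. A toy rendering of the same mechanism (the Status struct here is a simplified stand-in, not minikube's actual type):

// statusfmt.go - toy version of how a --format flag renders a status struct
// through text/template.
package main

import (
	"os"
	"text/template"
)

// Status is a simplified stand-in for the struct minikube formats.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}

	// Equivalent of `--format={{.APIServer}}`: parse the user-supplied
	// template and execute it against the struct.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}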
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220801172716-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220801172716-13911 logs -n 25: (3.481546736s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:51 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:51 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:51 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220801175129-13911 --memory=2200           | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:52 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220801175129-13911 --memory=2200           | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:53 PDT | 01 Aug 22 17:53 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:53 PDT | 01 Aug 22 17:53 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:53 PDT | 01 Aug 22 17:53 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:52:23
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:52:23.673228   32787 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:52:23.673420   32787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:52:23.673426   32787 out.go:309] Setting ErrFile to fd 2...
	I0801 17:52:23.673430   32787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:52:23.673533   32787 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:52:23.673982   32787 out.go:303] Setting JSON to false
	I0801 17:52:23.688935   32787 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10314,"bootTime":1659391229,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:52:23.689050   32787 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:52:23.710547   32787 out.go:177] * [newest-cni-20220801175129-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:52:23.732744   32787 notify.go:193] Checking for updates...
	I0801 17:52:23.754303   32787 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:52:23.776387   32787 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:52:23.797749   32787 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:52:23.819483   32787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:52:23.841446   32787 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:52:23.863222   32787 config.go:180] Loaded profile config "newest-cni-20220801175129-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:52:23.863894   32787 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:52:23.933038   32787 docker.go:137] docker version: linux-20.10.17
	I0801 17:52:23.933180   32787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:52:24.067710   32787 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:52:24.000385977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:52:24.109876   32787 out.go:177] * Using the docker driver based on existing profile
	I0801 17:52:24.130849   32787 start.go:284] selected driver: docker
	I0801 17:52:24.130895   32787 start.go:808] validating driver "docker" against &{Name:newest-cni-20220801175129-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:tru
e extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:52:24.131069   32787 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:52:24.134420   32787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:52:24.267922   32787 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:52:24.202408606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:52:24.268091   32787 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0801 17:52:24.268108   32787 cni.go:95] Creating CNI manager for ""
	I0801 17:52:24.268117   32787 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:52:24.268125   32787 start_flags.go:310] config:
	{Name:newest-cni-20220801175129-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:52:24.289987   32787 out.go:177] * Starting control plane node newest-cni-20220801175129-13911 in cluster newest-cni-20220801175129-13911
	I0801 17:52:24.311931   32787 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:52:24.333756   32787 out.go:177] * Pulling base image ...
	I0801 17:52:24.375960   32787 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:52:24.375962   32787 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:52:24.376098   32787 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0801 17:52:24.376118   32787 cache.go:57] Caching tarball of preloaded images
	I0801 17:52:24.376312   32787 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:52:24.377040   32787 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0801 17:52:24.379371   32787 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/config.json ...
	I0801 17:52:24.441569   32787 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:52:24.441586   32787 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:52:24.441597   32787 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:52:24.441636   32787 start.go:371] acquiring machines lock for newest-cni-20220801175129-13911: {Name:mk442d39e1f1a32a0afed4f835844094a50c76c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:52:24.441709   32787 start.go:375] acquired machines lock for "newest-cni-20220801175129-13911" in 57.497µs
	I0801 17:52:24.441728   32787 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:52:24.441736   32787 fix.go:55] fixHost starting: 
	I0801 17:52:24.441948   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:24.509430   32787 fix.go:103] recreateIfNeeded on newest-cni-20220801175129-13911: state=Stopped err=<nil>
	W0801 17:52:24.509459   32787 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:52:24.531450   32787 out.go:177] * Restarting existing docker container for "newest-cni-20220801175129-13911" ...
	I0801 17:52:24.553225   32787 cli_runner.go:164] Run: docker start newest-cni-20220801175129-13911
	I0801 17:52:24.902900   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:24.976389   32787 kic.go:415] container "newest-cni-20220801175129-13911" state is running.
	I0801 17:52:24.977146   32787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220801175129-13911
	I0801 17:52:25.050538   32787 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/config.json ...
	I0801 17:52:25.050942   32787 machine.go:88] provisioning docker machine ...
	I0801 17:52:25.050970   32787 ubuntu.go:169] provisioning hostname "newest-cni-20220801175129-13911"
	I0801 17:52:25.051043   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.126021   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:25.126221   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:25.126238   32787 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220801175129-13911 && echo "newest-cni-20220801175129-13911" | sudo tee /etc/hostname
	I0801 17:52:25.248738   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220801175129-13911
	
	I0801 17:52:25.248824   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.321597   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:25.321746   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:25.321766   32787 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220801175129-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220801175129-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220801175129-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:52:25.434932   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: 
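	The SSH command above is an idempotent hosts-file edit: /etc/hosts is only touched when the hostname is not yet mapped. The same check-then-append pattern done locally in Go rather than over SSH (the function name is illustrative):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsLine appends "ip name" to path only when no existing
	// line already maps that name, mirroring the grep-then-tee guard
	// in the SSH command above.
	func ensureHostsLine(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			if len(fields) < 2 {
				continue
			}
			for _, f := range fields[1:] {
				if f == name {
					return nil // already present, nothing to do
				}
			}
		}
		f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintf(f, "%s\t%s\n", ip, name)
		return err
	}

	func main() {
		if err := ensureHostsLine("/etc/hosts", "127.0.1.1", "newest-cni-20220801175129-13911"); err != nil {
			fmt.Println("hosts update failed:", err)
		}
	}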
	I0801 17:52:25.434954   32787 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:52:25.434989   32787 ubuntu.go:177] setting up certificates
	I0801 17:52:25.435000   32787 provision.go:83] configureAuth start
	I0801 17:52:25.435069   32787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220801175129-13911
	I0801 17:52:25.507735   32787 provision.go:138] copyHostCerts
	I0801 17:52:25.507826   32787 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:52:25.507858   32787 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:52:25.507959   32787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:52:25.508136   32787 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:52:25.508145   32787 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:52:25.508210   32787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:52:25.508393   32787 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:52:25.508399   32787 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:52:25.508476   32787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:52:25.508593   32787 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220801175129-13911 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220801175129-13911]
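	Generating a server certificate with that SAN list is plain crypto/x509 work. A self-contained Go sketch that self-signs for brevity (the real flow signs with the ca.pem/ca-key.pem pair named above):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Fresh key; the SAN list mirrors the san=[...] entries in the log.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-20220801175129-13911"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-20220801175129-13911"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		}
		// Self-signed here for brevity; minikube signs with its CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}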
	I0801 17:52:25.689638   32787 provision.go:172] copyRemoteCerts
	I0801 17:52:25.689698   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:52:25.689743   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.760221   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:25.842816   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:52:25.859614   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0801 17:52:25.877787   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0801 17:52:25.894499   32787 provision.go:86] duration metric: configureAuth took 459.438174ms
	I0801 17:52:25.894511   32787 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:52:25.894655   32787 config.go:180] Loaded profile config "newest-cni-20220801175129-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:52:25.894705   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.965303   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:25.965461   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:25.965476   32787 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:52:26.081993   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:52:26.082006   32787 ubuntu.go:71] root file system type: overlay
	I0801 17:52:26.082200   32787 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:52:26.082279   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.152899   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:26.153058   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:26.153124   32787 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:52:26.273565   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:52:26.273643   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.344500   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:26.344674   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:26.344687   32787 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:52:26.461478   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:52:26.461494   32787 machine.go:91] provisioned docker machine in 1.410404064s
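	The `diff -u ... || { mv ...; systemctl ... }` one-liner above restarts Docker only when the rendered unit actually differs from what is installed. The same write-only-if-changed idea in Go (names are illustrative):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged replaces path with data only when the contents differ,
	// and reports whether a change was made so the caller can decide to
	// run `systemctl daemon-reload && systemctl restart docker`.
	func writeIfChanged(path string, data []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, data) {
			return false, nil // identical: skip the restart entirely
		}
		if err := os.WriteFile(path, data, 0644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		changed, err := writeIfChanged("/lib/systemd/system/docker.service", []byte("...unit text...\n"))
		if err != nil {
			panic(err)
		}
		fmt.Println("unit changed:", changed)
	}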
	I0801 17:52:26.461511   32787 start.go:307] post-start starting for "newest-cni-20220801175129-13911" (driver="docker")
	I0801 17:52:26.461517   32787 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:52:26.461594   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:52:26.461638   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.533240   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:26.617860   32787 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:52:26.621187   32787 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:52:26.621203   32787 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:52:26.621209   32787 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:52:26.621218   32787 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:52:26.621226   32787 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:52:26.621342   32787 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:52:26.621472   32787 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:52:26.621619   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:52:26.628457   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:52:26.645485   32787 start.go:310] post-start completed in 183.946981ms
	I0801 17:52:26.645554   32787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:52:26.645614   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.716259   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:26.799945   32787 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:52:26.804224   32787 fix.go:57] fixHost completed within 2.362249069s
	I0801 17:52:26.804235   32787 start.go:82] releasing machines lock for "newest-cni-20220801175129-13911", held for 2.362281129s
	I0801 17:52:26.804320   32787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220801175129-13911
	I0801 17:52:26.873689   32787 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:52:26.873695   32787 ssh_runner.go:195] Run: systemctl --version
	I0801 17:52:26.873749   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.873757   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.947817   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:26.950793   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:27.027323   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0801 17:52:27.218560   32787 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0801 17:52:27.231317   32787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:52:27.308327   32787 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0801 17:52:27.385719   32787 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:52:27.395516   32787 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:52:27.395572   32787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:52:27.404611   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:52:27.416899   32787 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:52:27.490191   32787 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:52:27.556237   32787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:52:27.626123   32787 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:52:27.863853   32787 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0801 17:52:27.937016   32787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:52:28.004508   32787 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0801 17:52:28.013670   32787 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0801 17:52:28.013735   32787 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0801 17:52:28.017472   32787 start.go:471] Will wait 60s for crictl version
	I0801 17:52:28.017516   32787 ssh_runner.go:195] Run: sudo crictl version
	I0801 17:52:28.045591   32787 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
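	The `crictl version` block above is simple `Key:  value` text. A Go sketch of parsing it into a map (an assumed shape for illustration, not minikube's actual parser):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseCrictlVersion turns `crictl version` key/value output, as
	// echoed in the log above, into a map.
	func parseCrictlVersion(out string) map[string]string {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			k, v, ok := strings.Cut(sc.Text(), ":")
			if !ok {
				continue
			}
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
		return fields
	}

	func main() {
		out := "Version:  0.1.0\nRuntimeName:  docker\nRuntimeVersion:  20.10.17\nRuntimeApiVersion:  1.41.0\n"
		v := parseCrictlVersion(out)
		fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // docker 20.10.17
	}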
	I0801 17:52:28.045657   32787 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:52:28.081103   32787 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:52:28.139569   32787 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0801 17:52:28.139751   32787 cli_runner.go:164] Run: docker exec -t newest-cni-20220801175129-13911 dig +short host.docker.internal
	I0801 17:52:28.271354   32787 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:52:28.271611   32787 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:52:28.276062   32787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:52:28.285339   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:28.377645   32787 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0801 17:52:28.400051   32787 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:52:28.400188   32787 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:52:28.430349   32787 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0801 17:52:28.430364   32787 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:52:28.430425   32787 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:52:28.459887   32787 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0801 17:52:28.459907   32787 cache_images.go:84] Images are preloaded, skipping loading
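	Deciding that "Images are preloaded" comes down to listing the daemon's repo:tag pairs and checking that the required set is present. A Go sketch of that check (the `want` list is a sample, not the full preload manifest):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// daemonImages lists repo:tag pairs known to the daemon, the same
	// `docker images --format` call the log runs twice above.
	func daemonImages() (map[string]bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return nil, err
		}
		set := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			set[img] = true
		}
		return set, nil
	}

	func main() {
		want := []string{"k8s.gcr.io/kube-apiserver:v1.24.3", "k8s.gcr.io/etcd:3.5.3-0"}
		have, err := daemonImages()
		if err != nil {
			panic(err)
		}
		for _, img := range want {
			if !have[img] {
				fmt.Println("missing, would extract preload:", img)
			}
		}
	}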
	I0801 17:52:28.459999   32787 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:52:28.538859   32787 cni.go:95] Creating CNI manager for ""
	I0801 17:52:28.538872   32787 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:52:28.538886   32787 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0801 17:52:28.538897   32787 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220801175129-13911 NodeName:newest-cni-20220801175129-13911 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:52:28.539006   32787 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220801175129-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
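	minikube renders this kubeadm config from the options struct logged above. A toy Go text/template rendering of a trimmed-down ClusterConfiguration (the struct, field names, and template text are illustrative):

	package main

	import (
		"os"
		"text/template"
	)

	// A cut-down version of the config above, filled in from a small
	// parameter struct the way kubeadm.go fills in the full template.
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:8443
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	type params struct {
		KubernetesVersion    string
		ControlPlaneEndpoint string
		PodSubnet            string
		ServiceSubnet        string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		_ = t.Execute(os.Stdout, params{
			KubernetesVersion:    "v1.24.3",
			ControlPlaneEndpoint: "control-plane.minikube.internal",
			PodSubnet:            "192.168.111.111/16",
			ServiceSubnet:        "10.96.0.0/12",
		})
	}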
	
	I0801 17:52:28.539092   32787 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220801175129-13911 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:52:28.539154   32787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 17:52:28.547103   32787 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:52:28.547163   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:52:28.554183   32787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0801 17:52:28.566838   32787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:52:28.579088   32787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0801 17:52:28.592069   32787 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:52:28.595735   32787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:52:28.605024   32787 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911 for IP: 192.168.67.2
	I0801 17:52:28.605135   32787 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:52:28.605189   32787 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:52:28.605266   32787 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/client.key
	I0801 17:52:28.605323   32787 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/apiserver.key.c7fa3a9e
	I0801 17:52:28.605376   32787 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/proxy-client.key
	I0801 17:52:28.606246   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:52:28.606339   32787 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:52:28.606357   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:52:28.606485   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:52:28.606564   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:52:28.606614   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:52:28.606880   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:52:28.607387   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:52:28.624412   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0801 17:52:28.641432   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:52:28.657666   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0801 17:52:28.674229   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:52:28.719270   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:52:28.736654   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:52:28.753158   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:52:28.770469   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:52:28.787048   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:52:28.803970   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:52:28.821159   32787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:52:28.833579   32787 ssh_runner.go:195] Run: openssl version
	I0801 17:52:28.839216   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:52:28.846925   32787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:52:28.850785   32787 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:52:28.850830   32787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:52:28.855816   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:52:28.862898   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:52:28.870230   32787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:52:28.874033   32787 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:52:28.874072   32787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:52:28.879145   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
	I0801 17:52:28.886176   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:52:28.893778   32787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:52:28.897545   32787 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:52:28.897586   32787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:52:28.902789   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
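	Each `test -L ... || ln -fs ...` above installs an OpenSSL hash-named symlink only when one is not already present. The equivalent guard in Go (a sketch; the hash names themselves come from `openssl x509 -hash` as run above):

	package main

	import (
		"fmt"
		"os"
	)

	// ensureSymlink reproduces the `test -L ... || ln -fs ...` guard:
	// install the hash-named link only if it is not already a symlink.
	func ensureSymlink(target, link string) error {
		if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
			return nil // already a symlink, leave it alone
		}
		os.Remove(link) // ln -fs semantics: replace whatever is there
		return os.Symlink(target, link)
	}

	func main() {
		if err := ensureSymlink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/b5213941.0"); err != nil {
			fmt.Println("symlink:", err)
		}
	}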
	I0801 17:52:28.910029   32787 kubeadm.go:395] StartCluster: {Name:newest-cni-20220801175129-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:52:28.910133   32787 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:52:28.938635   32787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:52:28.946148   32787 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:52:28.946164   32787 kubeadm.go:626] restartCluster start
	I0801 17:52:28.946212   32787 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:52:28.953014   32787 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:28.953076   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:29.024288   32787 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220801175129-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:52:29.024479   32787 kubeconfig.go:127] "newest-cni-20220801175129-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:52:29.024804   32787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:52:29.026040   32787 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:52:29.033882   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.033956   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.042444   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.243382   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.243445   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.252434   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.444705   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.444803   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.455884   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.644689   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.644878   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.656009   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.844691   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.844898   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.855271   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.044484   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.044572   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.054992   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.244551   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.244703   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.256059   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.443337   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.443520   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.454029   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.642687   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.642858   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.653207   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.842691   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.842787   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.851677   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.044755   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.044936   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.056082   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.244798   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.244988   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.255847   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.442880   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.442988   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.453367   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.642749   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.642915   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.652975   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.844827   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.845046   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.855350   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.043498   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:32.043651   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:32.053903   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.053914   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:32.053961   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:32.061661   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.061673   32787 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
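	The repeated "Checking apiserver status" attempts above are a fixed-interval pgrep poll that ends in "timed out waiting for the condition". A compact Go version of that loop (the interval and budget are illustrative, not minikube's values):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// apiserverPID retries the same `pgrep -xnf` call the log runs above
	// until a PID appears or the budget is spent.
	func apiserverPID(budget time.Duration) (string, error) {
		stop := time.Now().Add(budget)
		for time.Now().Before(stop) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			time.Sleep(200 * time.Millisecond)
		}
		return "", fmt.Errorf("timed out waiting for the condition")
	}

	func main() {
		pid, err := apiserverPID(3 * time.Second)
		if err != nil {
			fmt.Println("needs reconfigure:", err) // the branch taken above
			return
		}
		fmt.Println("apiserver pid:", pid)
	}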
	I0801 17:52:32.061681   32787 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:52:32.061735   32787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:52:32.091970   32787 docker.go:443] Stopping containers: [504a8b59e1ce 6ada3ab8487d e7ffafb0ce3f 46e43480cef2 28e90bf32a64 6686f00cb0ec 0da10eabf430 9e2b4b1800e1 ed072705134c ae7511f543c8 af02fe8a2673 42d0d44d7c6f d698d4a20553 06a54abbd12b aeff65b18cdf d9acb50e1a8c]
	I0801 17:52:32.092047   32787 ssh_runner.go:195] Run: docker stop 504a8b59e1ce 6ada3ab8487d e7ffafb0ce3f 46e43480cef2 28e90bf32a64 6686f00cb0ec 0da10eabf430 9e2b4b1800e1 ed072705134c ae7511f543c8 af02fe8a2673 42d0d44d7c6f d698d4a20553 06a54abbd12b aeff65b18cdf d9acb50e1a8c
	I0801 17:52:32.121345   32787 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:52:32.131474   32787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:52:32.139435   32787 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  2 00:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug  2 00:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug  2 00:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug  2 00:51 /etc/kubernetes/scheduler.conf
	
	I0801 17:52:32.139495   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 17:52:32.146996   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 17:52:32.154372   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 17:52:32.161557   32787 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.161606   32787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0801 17:52:32.168595   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 17:52:32.175658   32787 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.175708   32787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0801 17:52:32.182506   32787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:52:32.190951   32787 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:52:32.190967   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:32.240435   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:32.997584   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:33.179336   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:33.228816   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:33.285208   32787 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:52:33.285269   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:33.818695   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:34.318201   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:34.329612   32787 api_server.go:71] duration metric: took 1.044336555s to wait for apiserver process to appear ...
	I0801 17:52:34.329626   32787 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:52:34.329634   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:34.330719   32787 api_server.go:256] stopped: https://127.0.0.1:53000/healthz: Get "https://127.0.0.1:53000/healthz": EOF
	I0801 17:52:34.831850   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:37.477511   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0801 17:52:37.477526   32787 api_server.go:102] status: https://127.0.0.1:53000/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:52:37.831072   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:37.839203   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:52:37.839219   32787 api_server.go:102] status: https://127.0.0.1:53000/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:52:38.331454   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:38.338440   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:52:38.338453   32787 api_server.go:102] status: https://127.0.0.1:53000/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:52:38.831673   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:38.839175   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 200:
	ok
	I0801 17:52:38.845486   32787 api_server.go:140] control plane version: v1.24.3
	I0801 17:52:38.845498   32787 api_server.go:130] duration metric: took 4.515618355s to wait for apiserver health ...
	I0801 17:52:38.845504   32787 cni.go:95] Creating CNI manager for ""
	I0801 17:52:38.845508   32787 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:52:38.845520   32787 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:52:38.851916   32787 system_pods.go:59] 8 kube-system pods found
	I0801 17:52:38.851934   32787 system_pods.go:61] "coredns-6d4b75cb6d-cs7mc" [c15c9885-12b6-401a-80b5-306326ed8760] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0801 17:52:38.851947   32787 system_pods.go:61] "etcd-newest-cni-20220801175129-13911" [6c0faf34-6ed0-45fb-8af0-d822ee539d57] Running
	I0801 17:52:38.851952   32787 system_pods.go:61] "kube-apiserver-newest-cni-20220801175129-13911" [faf7abbe-9d33-4c77-89e7-5ee799592377] Running
	I0801 17:52:38.851956   32787 system_pods.go:61] "kube-controller-manager-newest-cni-20220801175129-13911" [eb59c99e-98e9-44e8-bf4c-d8237aaa34ab] Running
	I0801 17:52:38.851961   32787 system_pods.go:61] "kube-proxy-2pmw7" [b621ae1b-52fc-4d15-b7bd-b6b9d074d419] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0801 17:52:38.851966   32787 system_pods.go:61] "kube-scheduler-newest-cni-20220801175129-13911" [c70c5eb8-13e4-400c-aa52-2a94e85f0c5e] Running
	I0801 17:52:38.851970   32787 system_pods.go:61] "metrics-server-5c6f97fb75-qwvtt" [6f1f27bb-dc60-477b-9476-b02a8d1c7b00] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:52:38.851975   32787 system_pods.go:61] "storage-provisioner" [bfbcaa76-3903-4a2c-9081-426d2c26ec38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:52:38.851979   32787 system_pods.go:74] duration metric: took 6.454438ms to wait for pod list to return data ...
	I0801 17:52:38.851985   32787 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:52:38.854828   32787 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:52:38.854841   32787 node_conditions.go:123] node cpu capacity is 6
	I0801 17:52:38.854849   32787 node_conditions.go:105] duration metric: took 2.86028ms to run NodePressure ...
	I0801 17:52:38.854858   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:39.017659   32787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:52:39.028040   32787 ops.go:34] apiserver oom_adj: -16
	I0801 17:52:39.028053   32787 kubeadm.go:630] restartCluster took 10.081237328s
	I0801 17:52:39.028063   32787 kubeadm.go:397] StartCluster complete in 10.117389461s
	I0801 17:52:39.028076   32787 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:52:39.028149   32787 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:52:39.028753   32787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:52:39.032857   32787 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220801175129-13911" rescaled to 1
	I0801 17:52:39.032920   32787 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:52:39.032937   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:52:39.033003   32787 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0801 17:52:39.033212   32787 config.go:180] Loaded profile config "newest-cni-20220801175129-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:52:39.058242   32787 out.go:177] * Verifying Kubernetes components...
	I0801 17:52:39.058371   32787 addons.go:65] Setting dashboard=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.094923   32787 addons.go:153] Setting addon dashboard=true in "newest-cni-20220801175129-13911"
	I0801 17:52:39.058373   32787 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.094932   32787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0801 17:52:39.094946   32787 addons.go:162] addon dashboard should already be in state true
	I0801 17:52:39.094970   32787 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220801175129-13911"
	W0801 17:52:39.094990   32787 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:52:39.058371   32787 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.095021   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.058385   32787 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.095060   32787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220801175129-13911"
	I0801 17:52:39.095084   32787 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220801175129-13911"
	W0801 17:52:39.095103   32787 addons.go:162] addon metrics-server should already be in state true
	I0801 17:52:39.095109   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.095175   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.095545   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.095796   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.097115   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.097117   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.146335   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.146341   32787 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0801 17:52:39.235029   32787 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220801175129-13911"
	I0801 17:52:39.251541   32787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:52:39.271211   32787 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	W0801 17:52:39.271269   32787 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:52:39.308909   32787 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:52:39.330553   32787 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0801 17:52:39.330575   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 17:52:39.330621   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.406629   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0801 17:52:39.427343   32787 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0801 17:52:39.427370   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0801 17:52:39.407092   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.465342   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0801 17:52:39.465354   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0801 17:52:39.406765   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.427563   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.460245   32787 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:52:39.465420   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.465467   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:39.484866   32787 api_server.go:71] duration metric: took 451.841665ms to wait for apiserver process to appear ...
	I0801 17:52:39.484924   32787 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:52:39.484971   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:39.494690   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 200:
	ok
	I0801 17:52:39.497264   32787 api_server.go:140] control plane version: v1.24.3
	I0801 17:52:39.497280   32787 api_server.go:130] duration metric: took 12.34487ms to wait for apiserver health ...
	I0801 17:52:39.497288   32787 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:52:39.508373   32787 system_pods.go:59] 8 kube-system pods found
	I0801 17:52:39.508410   32787 system_pods.go:61] "coredns-6d4b75cb6d-cs7mc" [c15c9885-12b6-401a-80b5-306326ed8760] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0801 17:52:39.508419   32787 system_pods.go:61] "etcd-newest-cni-20220801175129-13911" [6c0faf34-6ed0-45fb-8af0-d822ee539d57] Running
	I0801 17:52:39.508427   32787 system_pods.go:61] "kube-apiserver-newest-cni-20220801175129-13911" [faf7abbe-9d33-4c77-89e7-5ee799592377] Running
	I0801 17:52:39.508439   32787 system_pods.go:61] "kube-controller-manager-newest-cni-20220801175129-13911" [eb59c99e-98e9-44e8-bf4c-d8237aaa34ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0801 17:52:39.508448   32787 system_pods.go:61] "kube-proxy-2pmw7" [b621ae1b-52fc-4d15-b7bd-b6b9d074d419] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0801 17:52:39.508455   32787 system_pods.go:61] "kube-scheduler-newest-cni-20220801175129-13911" [c70c5eb8-13e4-400c-aa52-2a94e85f0c5e] Running
	I0801 17:52:39.508464   32787 system_pods.go:61] "metrics-server-5c6f97fb75-qwvtt" [6f1f27bb-dc60-477b-9476-b02a8d1c7b00] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:52:39.508473   32787 system_pods.go:61] "storage-provisioner" [bfbcaa76-3903-4a2c-9081-426d2c26ec38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:52:39.508483   32787 system_pods.go:74] duration metric: took 11.189474ms to wait for pod list to return data ...
	I0801 17:52:39.508489   32787 default_sa.go:34] waiting for default service account to be created ...
	I0801 17:52:39.513490   32787 default_sa.go:45] found service account: "default"
	I0801 17:52:39.513508   32787 default_sa.go:55] duration metric: took 5.01286ms for default service account to be created ...
	I0801 17:52:39.513520   32787 kubeadm.go:572] duration metric: took 480.525733ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0801 17:52:39.513574   32787 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:52:39.519363   32787 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:52:39.519381   32787 node_conditions.go:123] node cpu capacity is 6
	I0801 17:52:39.519391   32787 node_conditions.go:105] duration metric: took 5.809738ms to run NodePressure ...
	I0801 17:52:39.519419   32787 start.go:216] waiting for startup goroutines ...
	I0801 17:52:39.586557   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.587672   32787 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:52:39.587683   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:52:39.587736   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.588232   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.590309   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.669406   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.720677   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0801 17:52:39.720697   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0801 17:52:39.723721   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0801 17:52:39.723735   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0801 17:52:39.732299   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:52:39.803502   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0801 17:52:39.803524   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0801 17:52:39.807057   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0801 17:52:39.807074   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0801 17:52:39.824461   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:52:39.824576   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:52:39.824587   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0801 17:52:39.831720   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0801 17:52:39.831736   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0801 17:52:39.908512   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:52:39.914142   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0801 17:52:39.914183   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0801 17:52:39.936302   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0801 17:52:39.936320   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0801 17:52:40.028581   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0801 17:52:40.028597   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0801 17:52:40.130449   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0801 17:52:40.130463   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0801 17:52:40.215708   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0801 17:52:40.215722   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0801 17:52:40.232043   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:52:40.232058   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0801 17:52:40.250218   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:52:40.829184   32787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096813176s)
	I0801 17:52:40.829230   32787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.004700844s)
	I0801 17:52:40.843288   32787 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220801175129-13911"
	I0801 17:52:40.938463   32787 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0801 17:52:40.975473   32787 addons.go:414] enableAddons completed in 1.942433849s
	I0801 17:52:41.005347   32787 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:52:41.027519   32787 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220801175129-13911" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:33:03 UTC, end at Tue 2022-08-02 01:00:00 UTC. --
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.047508449Z" level=info msg="Processing signal 'terminated'"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.048554008Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.049066697Z" level=info msg="Daemon shutdown complete"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[130]: time="2022-08-02T00:33:06.049140956Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: docker.service: Succeeded.
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: Stopped Docker Application Container Engine.
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: Starting Docker Application Container Engine...
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.103993889Z" level=info msg="Starting up"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107258175Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107331231Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107364819Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.107377776Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108456849Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108470092Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108484226Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.108493814Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.111425754Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.115111191Z" level=info msg="Loading containers: start."
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.188779913Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.218225237Z" level=info msg="Loading containers: done."
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.226251934Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.226311143Z" level=info msg="Daemon has completed initialization"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 systemd[1]: Started Docker Application Container Engine.
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.252520264Z" level=info msg="API listen on [::]:2376"
	Aug 02 00:33:06 old-k8s-version-20220801172716-13911 dockerd[427]: time="2022-08-02T00:33:06.256100929Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-08-02T01:00:03Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  01:00:03 up  1:25,  0 users,  load average: 0.49, 0.64, 0.77
	Linux old-k8s-version-20220801172716-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:33:03 UTC, end at Tue 2022-08-02 01:00:03 UTC. --
	Aug 02 01:00:01 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 kubelet[34222]: I0802 01:00:02.465803   34222 server.go:410] Version: v1.16.0
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 kubelet[34222]: I0802 01:00:02.465918   34222 plugins.go:100] No cloud provider specified.
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 kubelet[34222]: I0802 01:00:02.465926   34222 server.go:773] Client rotation is on, will bootstrap in background
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 kubelet[34222]: I0802 01:00:02.467511   34222 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 kubelet[34222]: W0802 01:00:02.468156   34222 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 kubelet[34222]: W0802 01:00:02.468221   34222 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 kubelet[34222]: F0802 01:00:02.468243   34222 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 02 01:00:02 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1669.
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 kubelet[34242]: I0802 01:00:03.229552   34242 server.go:410] Version: v1.16.0
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 kubelet[34242]: I0802 01:00:03.229755   34242 plugins.go:100] No cloud provider specified.
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 kubelet[34242]: I0802 01:00:03.229765   34242 server.go:773] Client rotation is on, will bootstrap in background
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 kubelet[34242]: I0802 01:00:03.231393   34242 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 kubelet[34242]: W0802 01:00:03.232025   34242 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 kubelet[34242]: W0802 01:00:03.232086   34242 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 kubelet[34242]: F0802 01:00:03.232109   34242 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 02 01:00:03 old-k8s-version-20220801172716-13911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0801 18:00:03.244807   33517 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 2 (430.079998ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220801172716-13911" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.91s)

TestStartStop/group/newest-cni/serial/Pause (48.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220801175129-13911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911: exit status 2 (16.101662698s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911: exit status 2 (16.106052063s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220801175129-13911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220801175129-13911
helpers_test.go:235: (dbg) docker inspect newest-cni-20220801175129-13911:

-- stdout --
	[
	    {
	        "Id": "975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238",
	        "Created": "2022-08-02T00:51:36.137976042Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 316296,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:52:24.903576593Z",
	            "FinishedAt": "2022-08-02T00:52:22.987858752Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238/hostname",
	        "HostsPath": "/var/lib/docker/containers/975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238/hosts",
	        "LogPath": "/var/lib/docker/containers/975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238/975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238-json.log",
	        "Name": "/newest-cni-20220801175129-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-20220801175129-13911:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220801175129-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e82e7385558f16e0b3468f1017b5759826988a63794b16cf4a23999d15d07831-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e82e7385558f16e0b3468f1017b5759826988a63794b16cf4a23999d15d07831/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e82e7385558f16e0b3468f1017b5759826988a63794b16cf4a23999d15d07831/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e82e7385558f16e0b3468f1017b5759826988a63794b16cf4a23999d15d07831/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220801175129-13911",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220801175129-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220801175129-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220801175129-13911",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220801175129-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74dcba454ad55feae7c3d27289ba99cf4223d973f07a3a68f95ce835b9c5bf74",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52996"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52997"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52998"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52999"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53000"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/74dcba454ad5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220801175129-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "975a147da1dc",
	                        "newest-cni-20220801175129-13911"
	                    ],
	                    "NetworkID": "ae6cf09ebf463df749fa44ed1b4f2989ff992fc2a5100c17f88bd79c2165a910",
	                    "EndpointID": "c45d16379854d6eb795ba6c339d0a7fd5fac8d8c29c5077f13d38cd6b586af10",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220801175129-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220801175129-13911 logs -n 25: (4.020569591s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                                        | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:42 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:51 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:51 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:51 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220801175129-13911 --memory=2200           | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:52 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220801175129-13911 --memory=2200           | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:53 PDT | 01 Aug 22 17:53 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:52:23
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:52:23.673228   32787 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:52:23.673420   32787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:52:23.673426   32787 out.go:309] Setting ErrFile to fd 2...
	I0801 17:52:23.673430   32787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:52:23.673533   32787 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:52:23.673982   32787 out.go:303] Setting JSON to false
	I0801 17:52:23.688935   32787 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10314,"bootTime":1659391229,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:52:23.689050   32787 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:52:23.710547   32787 out.go:177] * [newest-cni-20220801175129-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:52:23.732744   32787 notify.go:193] Checking for updates...
	I0801 17:52:23.754303   32787 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:52:23.776387   32787 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:52:23.797749   32787 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:52:23.819483   32787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:52:23.841446   32787 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:52:23.863222   32787 config.go:180] Loaded profile config "newest-cni-20220801175129-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:52:23.863894   32787 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:52:23.933038   32787 docker.go:137] docker version: linux-20.10.17
	I0801 17:52:23.933180   32787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:52:24.067710   32787 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:52:24.000385977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:52:24.109876   32787 out.go:177] * Using the docker driver based on existing profile
	I0801 17:52:24.130849   32787 start.go:284] selected driver: docker
	I0801 17:52:24.130895   32787 start.go:808] validating driver "docker" against &{Name:newest-cni-20220801175129-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:52:24.131069   32787 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:52:24.134420   32787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:52:24.267922   32787 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:52:24.202408606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:52:24.268091   32787 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0801 17:52:24.268108   32787 cni.go:95] Creating CNI manager for ""
	I0801 17:52:24.268117   32787 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:52:24.268125   32787 start_flags.go:310] config:
	{Name:newest-cni-20220801175129-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:52:24.289987   32787 out.go:177] * Starting control plane node newest-cni-20220801175129-13911 in cluster newest-cni-20220801175129-13911
	I0801 17:52:24.311931   32787 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:52:24.333756   32787 out.go:177] * Pulling base image ...
	I0801 17:52:24.375960   32787 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:52:24.375962   32787 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:52:24.376098   32787 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0801 17:52:24.376118   32787 cache.go:57] Caching tarball of preloaded images
	I0801 17:52:24.376312   32787 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:52:24.377040   32787 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0801 17:52:24.379371   32787 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/config.json ...
	I0801 17:52:24.441569   32787 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:52:24.441586   32787 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:52:24.441597   32787 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:52:24.441636   32787 start.go:371] acquiring machines lock for newest-cni-20220801175129-13911: {Name:mk442d39e1f1a32a0afed4f835844094a50c76c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:52:24.441709   32787 start.go:375] acquired machines lock for "newest-cni-20220801175129-13911" in 57.497µs
	I0801 17:52:24.441728   32787 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:52:24.441736   32787 fix.go:55] fixHost starting: 
	I0801 17:52:24.441948   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:24.509430   32787 fix.go:103] recreateIfNeeded on newest-cni-20220801175129-13911: state=Stopped err=<nil>
	W0801 17:52:24.509459   32787 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:52:24.531450   32787 out.go:177] * Restarting existing docker container for "newest-cni-20220801175129-13911" ...
	I0801 17:52:24.553225   32787 cli_runner.go:164] Run: docker start newest-cni-20220801175129-13911
	I0801 17:52:24.902900   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:24.976389   32787 kic.go:415] container "newest-cni-20220801175129-13911" state is running.
	I0801 17:52:24.977146   32787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220801175129-13911
	I0801 17:52:25.050538   32787 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/config.json ...
	I0801 17:52:25.050942   32787 machine.go:88] provisioning docker machine ...
	I0801 17:52:25.050970   32787 ubuntu.go:169] provisioning hostname "newest-cni-20220801175129-13911"
	I0801 17:52:25.051043   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.126021   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:25.126221   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:25.126238   32787 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220801175129-13911 && echo "newest-cni-20220801175129-13911" | sudo tee /etc/hostname
	I0801 17:52:25.248738   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220801175129-13911
	
	I0801 17:52:25.248824   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.321597   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:25.321746   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:25.321766   32787 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220801175129-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220801175129-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220801175129-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:52:25.434932   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: 
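
The SSH fragment above is the standard Debian/Ubuntu hostname fix-up: if no /etc/hosts line already ends in the new hostname, the provisioner either rewrites the existing 127.0.1.1 entry in place or appends one. A minimal Go sketch of the same logic (hypothetical ensureHostsEntry helper, not minikube's actual provisioner code):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry mirrors the shell logic above: if no line already maps
    // the hostname, rewrite an existing 127.0.1.1 entry or append a new one.
    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
    		return nil // already present
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	out := string(data)
    	if loopback.MatchString(out) {
    		out = loopback.ReplaceAllString(out, "127.0.1.1 "+hostname)
    	} else {
    		out = strings.TrimRight(out, "\n") + fmt.Sprintf("\n127.0.1.1 %s\n", hostname)
    	}
    	return os.WriteFile(path, []byte(out), 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "newest-cni-20220801175129-13911"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
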
	I0801 17:52:25.434954   32787 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:52:25.434989   32787 ubuntu.go:177] setting up certificates
	I0801 17:52:25.435000   32787 provision.go:83] configureAuth start
	I0801 17:52:25.435069   32787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220801175129-13911
	I0801 17:52:25.507735   32787 provision.go:138] copyHostCerts
	I0801 17:52:25.507826   32787 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:52:25.507858   32787 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:52:25.507959   32787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:52:25.508136   32787 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:52:25.508145   32787 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:52:25.508210   32787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:52:25.508393   32787 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:52:25.508399   32787 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:52:25.508476   32787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:52:25.508593   32787 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220801175129-13911 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220801175129-13911]
	I0801 17:52:25.689638   32787 provision.go:172] copyRemoteCerts
	I0801 17:52:25.689698   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:52:25.689743   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.760221   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:25.842816   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:52:25.859614   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0801 17:52:25.877787   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0801 17:52:25.894499   32787 provision.go:86] duration metric: configureAuth took 459.438174ms
	I0801 17:52:25.894511   32787 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:52:25.894655   32787 config.go:180] Loaded profile config "newest-cni-20220801175129-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:52:25.894705   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.965303   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:25.965461   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:25.965476   32787 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:52:26.081993   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:52:26.082006   32787 ubuntu.go:71] root file system type: overlay
	I0801 17:52:26.082200   32787 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:52:26.082279   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.152899   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:26.153058   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:26.153124   32787 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:52:26.273565   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
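
The unit file echoed back above leans on a systemd rule its own comments describe: a non-oneshot service may carry only one ExecStart=, so a replacement unit (or drop-in) must first emit a bare ExecStart= to clear the inherited command before supplying its own. A sketch of rendering such an override from a Go text/template (illustrative only; minikube's real template and dockerd flag set differ):

    package main

    import (
    	"os"
    	"text/template"
    )

    // The bare "ExecStart=" line clears the value inherited from the base
    // unit; without it systemd refuses to start the service with
    // "more than one ExecStart= setting".
    const overrideTmpl = `[Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock{{range .ExtraArgs}} {{.}}{{end}}
    `

    func main() {
    	t := template.Must(template.New("override").Parse(overrideTmpl))
    	data := struct{ ExtraArgs []string }{[]string{"--default-ulimit=nofile=1048576:1048576"}}
    	if err := t.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }
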
	
	I0801 17:52:26.273643   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.344500   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:26.344674   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:26.344687   32787 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:52:26.461478   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: 
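
The "diff -u old new || { mv ...; systemctl ... }" one-liner above makes the unit update idempotent: diff exits non-zero only when the files differ, so the daemon-reload and docker restart run only on an actual change. The same guard sketched in Go (hypothetical replaceIfChanged helper, assuming a missing current file should force the swap):

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    // replaceIfChanged touches the running service only when the proposed
    // unit file actually differs from the current one.
    func replaceIfChanged(current, proposed, service string) error {
    	oldBytes, _ := os.ReadFile(current) // missing file forces the replace
    	newBytes, err := os.ReadFile(proposed)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(oldBytes, newBytes) {
    		return os.Remove(proposed) // unchanged: discard and leave service alone
    	}
    	if err := os.Rename(proposed, current); err != nil {
    		return err
    	}
    	for _, args := range [][]string{{"daemon-reload"}, {"restart", service}} {
    		if err := exec.Command("systemctl", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	_ = replaceIfChanged("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new", "docker")
    }
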
	I0801 17:52:26.461494   32787 machine.go:91] provisioned docker machine in 1.410404064s
	I0801 17:52:26.461511   32787 start.go:307] post-start starting for "newest-cni-20220801175129-13911" (driver="docker")
	I0801 17:52:26.461517   32787 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:52:26.461594   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:52:26.461638   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.533240   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:26.617860   32787 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:52:26.621187   32787 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:52:26.621203   32787 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:52:26.621209   32787 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:52:26.621218   32787 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:52:26.621226   32787 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:52:26.621342   32787 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:52:26.621472   32787 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:52:26.621619   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:52:26.628457   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:52:26.645485   32787 start.go:310] post-start completed in 183.946981ms
	I0801 17:52:26.645554   32787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:52:26.645614   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.716259   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:26.799945   32787 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:52:26.804224   32787 fix.go:57] fixHost completed within 2.362249069s
	I0801 17:52:26.804235   32787 start.go:82] releasing machines lock for "newest-cni-20220801175129-13911", held for 2.362281129s
	I0801 17:52:26.804320   32787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220801175129-13911
	I0801 17:52:26.873689   32787 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:52:26.873695   32787 ssh_runner.go:195] Run: systemctl --version
	I0801 17:52:26.873749   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.873757   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.947817   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:26.950793   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:27.027323   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0801 17:52:27.218560   32787 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0801 17:52:27.231317   32787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:52:27.308327   32787 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0801 17:52:27.385719   32787 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:52:27.395516   32787 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:52:27.395572   32787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:52:27.404611   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:52:27.416899   32787 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:52:27.490191   32787 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:52:27.556237   32787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:52:27.626123   32787 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:52:27.863853   32787 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0801 17:52:27.937016   32787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:52:28.004508   32787 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0801 17:52:28.013670   32787 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0801 17:52:28.013735   32787 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0801 17:52:28.017472   32787 start.go:471] Will wait 60s for crictl version
	I0801 17:52:28.017516   32787 ssh_runner.go:195] Run: sudo crictl version
	I0801 17:52:28.045591   32787 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0801 17:52:28.045657   32787 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:52:28.081103   32787 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:52:28.139569   32787 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0801 17:52:28.139751   32787 cli_runner.go:164] Run: docker exec -t newest-cni-20220801175129-13911 dig +short host.docker.internal
	I0801 17:52:28.271354   32787 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:52:28.271611   32787 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:52:28.276062   32787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
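
The bash pipeline above pins host.minikube.internal by filtering out any old entry and appending a fresh one; it writes a temp file and copies it over /etc/hosts rather than renaming, because inside a container /etc/hosts is typically a bind mount and must be rewritten in place. A Go sketch of the same idea (hypothetical pinHost helper; truncate-and-write preserves the inode like the cp above):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost drops any existing line for name and appends "ip<TAB>name",
    // rewriting the file in place so a bind-mounted /etc/hosts keeps working.
    func pinHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name), "")
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
    }

    func main() {
    	_ = pinHost("/etc/hosts", "192.168.65.2", "host.minikube.internal")
    }
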
	I0801 17:52:28.285339   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:28.377645   32787 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0801 17:52:28.400051   32787 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:52:28.400188   32787 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:52:28.430349   32787 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0801 17:52:28.430364   32787 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:52:28.430425   32787 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:52:28.459887   32787 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0801 17:52:28.459907   32787 cache_images.go:84] Images are preloaded, skipping loading
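
Both "docker images --format {{.Repository}}:{{.Tag}}" runs above serve the preload check: the daemon's image list is compared against the set the chosen Kubernetes version needs, and tarball extraction or image loading is skipped when nothing is missing. A sketch of that set comparison (the expected list here is an illustrative subset, not minikube's real derivation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // missingImages returns the expected image refs that `docker images`
    // does not report, i.e. what would still need to be pulled or extracted.
    func missingImages(expected []string) ([]string, error) {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	var missing []string
    	for _, img := range expected {
    		if !have[img] {
    			missing = append(missing, img)
    		}
    	}
    	return missing, nil
    }

    func main() {
    	m, err := missingImages([]string{"k8s.gcr.io/pause:3.7", "k8s.gcr.io/etcd:3.5.3-0"})
    	fmt.Println(m, err)
    }
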
	I0801 17:52:28.459999   32787 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:52:28.538859   32787 cni.go:95] Creating CNI manager for ""
	I0801 17:52:28.538872   32787 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:52:28.538886   32787 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0801 17:52:28.538897   32787 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220801175129-13911 NodeName:newest-cni-20220801175129-13911 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:52:28.539006   32787 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220801175129-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
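
The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the options struct logged at kubeadm.go:158. A minimal sketch of that kind of rendering with Go's standard text/template, using a handful of values from this run; minikube's real template covers far more fields, so this is an illustration only:

```go
// Hypothetical sketch: render a kubeadm ClusterConfiguration from a few of
// the fields visible in the options struct above. Not minikube's template.
package main

import (
	"os"
	"text/template"
)

type clusterOpts struct {
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
	ControlPlane      string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlane}}:8443
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log lines above.
	opts := clusterOpts{
		KubernetesVersion: "v1.24.3",
		PodSubnet:         "192.168.111.111/16",
		ServiceCIDR:       "10.96.0.0/12",
		ControlPlane:      "control-plane.minikube.internal",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```
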
	
	I0801 17:52:28.539092   32787 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220801175129-13911 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:52:28.539154   32787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 17:52:28.547103   32787 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:52:28.547163   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:52:28.554183   32787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0801 17:52:28.566838   32787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:52:28.579088   32787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0801 17:52:28.592069   32787 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:52:28.595735   32787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
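
The one-liner above keeps /etc/hosts idempotent: drop any line already ending in a tab plus control-plane.minikube.internal, append the current mapping, then copy the temp file back over /etc/hosts. The same transformation as a pure-string Go sketch (updateHosts is a hypothetical helper, not minikube code; it never touches the real file, so no root is needed):

```go
// Sketch of the idempotent hosts-file update performed by the bash
// one-liner above: filter out the old entry, append the new mapping.
package main

import (
	"fmt"
	"strings"
)

func updateHosts(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		// Mirror `grep -v $'\t<name>$'`: skip lines ending in "\t<name>".
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	in := "127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(updateHosts(in, "192.168.67.2", "control-plane.minikube.internal"))
}
```
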
	I0801 17:52:28.605024   32787 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911 for IP: 192.168.67.2
	I0801 17:52:28.605135   32787 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:52:28.605189   32787 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:52:28.605266   32787 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/client.key
	I0801 17:52:28.605323   32787 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/apiserver.key.c7fa3a9e
	I0801 17:52:28.605376   32787 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/proxy-client.key
	I0801 17:52:28.606246   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:52:28.606339   32787 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:52:28.606357   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:52:28.606485   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:52:28.606564   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:52:28.606614   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:52:28.606880   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:52:28.607387   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:52:28.624412   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0801 17:52:28.641432   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:52:28.657666   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0801 17:52:28.674229   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:52:28.719270   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:52:28.736654   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:52:28.753158   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:52:28.770469   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:52:28.787048   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:52:28.803970   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:52:28.821159   32787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:52:28.833579   32787 ssh_runner.go:195] Run: openssl version
	I0801 17:52:28.839216   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:52:28.846925   32787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:52:28.850785   32787 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:52:28.850830   32787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:52:28.855816   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:52:28.862898   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:52:28.870230   32787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:52:28.874033   32787 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:52:28.874072   32787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:52:28.879145   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
	I0801 17:52:28.886176   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:52:28.893778   32787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:52:28.897545   32787 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:52:28.897586   32787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:52:28.902789   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
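
The openssl/ln sequence above is how OpenSSL's trust directory works: libraries look up CAs in /etc/ssl/certs by a filename of the form <subject-hash>.0, and `openssl x509 -hash -noout` prints that hash (b5213941 for minikubeCA here). A sketch of the same install step, assuming an openssl binary on PATH and write access to the target directory; installCA is illustrative, not a minikube function:

```go
// Sketch of the hash-and-symlink step above: compute the subject hash of
// a CA certificate and link "<hash>.0" to it so OpenSSL can find it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace a stale link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
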
	I0801 17:52:28.910029   32787 kubeadm.go:395] StartCluster: {Name:newest-cni-20220801175129-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:52:28.910133   32787 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:52:28.938635   32787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:52:28.946148   32787 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:52:28.946164   32787 kubeadm.go:626] restartCluster start
	I0801 17:52:28.946212   32787 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:52:28.953014   32787 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:28.953076   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:29.024288   32787 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220801175129-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:52:29.024479   32787 kubeconfig.go:127] "newest-cni-20220801175129-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:52:29.024804   32787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:52:29.026040   32787 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:52:29.033882   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.033956   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.042444   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.243382   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.243445   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.252434   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.444705   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.444803   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.455884   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.644689   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.644878   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.656009   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.844691   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.844898   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.855271   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.044484   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.044572   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.054992   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.244551   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.244703   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.256059   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.443337   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.443520   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.454029   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.642687   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.642858   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.653207   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.842691   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.842787   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.851677   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.044755   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.044936   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.056082   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.244798   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.244988   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.255847   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.442880   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.442988   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.453367   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.642749   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.642915   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.652975   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.844827   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.845046   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.855350   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.043498   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:32.043651   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:32.053903   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.053914   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:32.053961   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:32.061661   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.061673   32787 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
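
Everything from 17:52:29 to 17:52:32 above is one fixed-interval poll: re-run pgrep for a kube-apiserver process roughly every 200ms until one shows up or the wait expires, at which point the bootstrapper concludes a reconfigure is needed. A standalone sketch of that shape (pollFor is a hypothetical helper; minikube itself reaches for ready-made wait utilities):

```go
// Minimal sketch of the ~200ms poll loop visible above: re-run a check
// until it succeeds or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func pollFor(timeout, interval time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := pollFor(3*time.Second, 200*time.Millisecond, func() error {
		// Same probe as the log: does a kube-apiserver process exist?
		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	})
	fmt.Println(err)
}
```
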
	I0801 17:52:32.061681   32787 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:52:32.061735   32787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:52:32.091970   32787 docker.go:443] Stopping containers: [504a8b59e1ce 6ada3ab8487d e7ffafb0ce3f 46e43480cef2 28e90bf32a64 6686f00cb0ec 0da10eabf430 9e2b4b1800e1 ed072705134c ae7511f543c8 af02fe8a2673 42d0d44d7c6f d698d4a20553 06a54abbd12b aeff65b18cdf d9acb50e1a8c]
	I0801 17:52:32.092047   32787 ssh_runner.go:195] Run: docker stop 504a8b59e1ce 6ada3ab8487d e7ffafb0ce3f 46e43480cef2 28e90bf32a64 6686f00cb0ec 0da10eabf430 9e2b4b1800e1 ed072705134c ae7511f543c8 af02fe8a2673 42d0d44d7c6f d698d4a20553 06a54abbd12b aeff65b18cdf d9acb50e1a8c
	I0801 17:52:32.121345   32787 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:52:32.131474   32787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:52:32.139435   32787 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  2 00:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug  2 00:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug  2 00:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug  2 00:51 /etc/kubernetes/scheduler.conf
	
	I0801 17:52:32.139495   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 17:52:32.146996   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 17:52:32.154372   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 17:52:32.161557   32787 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.161606   32787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0801 17:52:32.168595   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 17:52:32.175658   32787 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.175708   32787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0801 17:52:32.182506   32787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:52:32.190951   32787 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:52:32.190967   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:32.240435   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:32.997584   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:33.179336   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:33.228816   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
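
Rather than a full `kubeadm init`, the restart path re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing /var/tmp/minikube/kubeadm.yaml, which is exactly the five commands above. A sketch of that sequence, assuming kubeadm on PATH and sufficient privileges:

```go
// Sketch of the staged restart above: run each kubeadm init phase in
// order against the config file from the log, stopping on first failure.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("control plane phases completed")
}
```
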
	I0801 17:52:33.285208   32787 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:52:33.285269   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:33.818695   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:34.318201   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:34.329612   32787 api_server.go:71] duration metric: took 1.044336555s to wait for apiserver process to appear ...
	I0801 17:52:34.329626   32787 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:52:34.329634   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:34.330719   32787 api_server.go:256] stopped: https://127.0.0.1:53000/healthz: Get "https://127.0.0.1:53000/healthz": EOF
	I0801 17:52:34.831850   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:37.477511   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0801 17:52:37.477526   32787 api_server.go:102] status: https://127.0.0.1:53000/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:52:37.831072   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:37.839203   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:52:37.839219   32787 api_server.go:102] status: https://127.0.0.1:53000/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:52:38.331454   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:38.338440   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:52:38.338453   32787 api_server.go:102] status: https://127.0.0.1:53000/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:52:38.831673   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:38.839175   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 200:
	ok
	I0801 17:52:38.845486   32787 api_server.go:140] control plane version: v1.24.3
	I0801 17:52:38.845498   32787 api_server.go:130] duration metric: took 4.515618355s to wait for apiserver health ...
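
The healthz probe above polls the forwarded port until the apiserver answers 200: the initial 403 (anonymous requests are forbidden until the RBAC bootstrap roles exist) and the 500s (failing poststarthooks) both count as "not ready yet". A minimal version of the check; TLS verification is skipped because the apiserver's self-signed certificate is being reached via 127.0.0.1:

```go
// Sketch of the /healthz probe above: GET the endpoint and report the
// status code, treating anything but 200 as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func healthz(url string) (int, string, error) {
	client := &http.Client{Transport: &http.Transport{
		// Self-signed apiserver cert addressed by IP, so skip verification.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), nil
}

func main() {
	// Port 53000 is the forwarded apiserver port from the log.
	code, body, err := healthz("https://127.0.0.1:53000/healthz")
	fmt.Println(code, body, err)
}
```
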
	I0801 17:52:38.845504   32787 cni.go:95] Creating CNI manager for ""
	I0801 17:52:38.845508   32787 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:52:38.845520   32787 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:52:38.851916   32787 system_pods.go:59] 8 kube-system pods found
	I0801 17:52:38.851934   32787 system_pods.go:61] "coredns-6d4b75cb6d-cs7mc" [c15c9885-12b6-401a-80b5-306326ed8760] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0801 17:52:38.851947   32787 system_pods.go:61] "etcd-newest-cni-20220801175129-13911" [6c0faf34-6ed0-45fb-8af0-d822ee539d57] Running
	I0801 17:52:38.851952   32787 system_pods.go:61] "kube-apiserver-newest-cni-20220801175129-13911" [faf7abbe-9d33-4c77-89e7-5ee799592377] Running
	I0801 17:52:38.851956   32787 system_pods.go:61] "kube-controller-manager-newest-cni-20220801175129-13911" [eb59c99e-98e9-44e8-bf4c-d8237aaa34ab] Running
	I0801 17:52:38.851961   32787 system_pods.go:61] "kube-proxy-2pmw7" [b621ae1b-52fc-4d15-b7bd-b6b9d074d419] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0801 17:52:38.851966   32787 system_pods.go:61] "kube-scheduler-newest-cni-20220801175129-13911" [c70c5eb8-13e4-400c-aa52-2a94e85f0c5e] Running
	I0801 17:52:38.851970   32787 system_pods.go:61] "metrics-server-5c6f97fb75-qwvtt" [6f1f27bb-dc60-477b-9476-b02a8d1c7b00] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:52:38.851975   32787 system_pods.go:61] "storage-provisioner" [bfbcaa76-3903-4a2c-9081-426d2c26ec38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:52:38.851979   32787 system_pods.go:74] duration metric: took 6.454438ms to wait for pod list to return data ...
	I0801 17:52:38.851985   32787 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:52:38.854828   32787 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:52:38.854841   32787 node_conditions.go:123] node cpu capacity is 6
	I0801 17:52:38.854849   32787 node_conditions.go:105] duration metric: took 2.86028ms to run NodePressure ...
	I0801 17:52:38.854858   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:39.017659   32787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:52:39.028040   32787 ops.go:34] apiserver oom_adj: -16
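
The oom_adj read above confirms the apiserver is shielded from the kernel's OOM killer; negative values (-16 here) make the process a much less likely victim. A Linux-only sketch of the same check, using pgrep the way the log does:

```go
// Sketch of the oom_adj check above: find the kube-apiserver PID and read
// its /proc/<pid>/oom_adj value. Linux-only; requires a running apiserver.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
		return
	}
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```
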
	I0801 17:52:39.028053   32787 kubeadm.go:630] restartCluster took 10.081237328s
	I0801 17:52:39.028063   32787 kubeadm.go:397] StartCluster complete in 10.117389461s
	I0801 17:52:39.028076   32787 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:52:39.028149   32787 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:52:39.028753   32787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:52:39.032857   32787 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220801175129-13911" rescaled to 1
	I0801 17:52:39.032920   32787 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:52:39.032937   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:52:39.033003   32787 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0801 17:52:39.033212   32787 config.go:180] Loaded profile config "newest-cni-20220801175129-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:52:39.058242   32787 out.go:177] * Verifying Kubernetes components...
	I0801 17:52:39.058371   32787 addons.go:65] Setting dashboard=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.094923   32787 addons.go:153] Setting addon dashboard=true in "newest-cni-20220801175129-13911"
	I0801 17:52:39.058373   32787 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.094932   32787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0801 17:52:39.094946   32787 addons.go:162] addon dashboard should already be in state true
	I0801 17:52:39.094970   32787 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220801175129-13911"
	W0801 17:52:39.094990   32787 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:52:39.058371   32787 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.095021   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.058385   32787 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.095060   32787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220801175129-13911"
	I0801 17:52:39.095084   32787 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220801175129-13911"
	W0801 17:52:39.095103   32787 addons.go:162] addon metrics-server should already be in state true
	I0801 17:52:39.095109   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.095175   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.095545   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.095796   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.097115   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.097117   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.146335   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.146341   32787 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0801 17:52:39.235029   32787 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220801175129-13911"
	I0801 17:52:39.251541   32787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:52:39.271211   32787 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	W0801 17:52:39.271269   32787 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:52:39.308909   32787 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:52:39.330553   32787 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0801 17:52:39.330575   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 17:52:39.330621   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.406629   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0801 17:52:39.427343   32787 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0801 17:52:39.427370   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0801 17:52:39.407092   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.465342   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0801 17:52:39.465354   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0801 17:52:39.406765   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.427563   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.460245   32787 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:52:39.465420   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.465467   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:39.484866   32787 api_server.go:71] duration metric: took 451.841665ms to wait for apiserver process to appear ...
	I0801 17:52:39.484924   32787 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:52:39.484971   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:39.494690   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 200:
	ok
	I0801 17:52:39.497264   32787 api_server.go:140] control plane version: v1.24.3
	I0801 17:52:39.497280   32787 api_server.go:130] duration metric: took 12.34487ms to wait for apiserver health ...
	I0801 17:52:39.497288   32787 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:52:39.508373   32787 system_pods.go:59] 8 kube-system pods found
	I0801 17:52:39.508410   32787 system_pods.go:61] "coredns-6d4b75cb6d-cs7mc" [c15c9885-12b6-401a-80b5-306326ed8760] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0801 17:52:39.508419   32787 system_pods.go:61] "etcd-newest-cni-20220801175129-13911" [6c0faf34-6ed0-45fb-8af0-d822ee539d57] Running
	I0801 17:52:39.508427   32787 system_pods.go:61] "kube-apiserver-newest-cni-20220801175129-13911" [faf7abbe-9d33-4c77-89e7-5ee799592377] Running
	I0801 17:52:39.508439   32787 system_pods.go:61] "kube-controller-manager-newest-cni-20220801175129-13911" [eb59c99e-98e9-44e8-bf4c-d8237aaa34ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0801 17:52:39.508448   32787 system_pods.go:61] "kube-proxy-2pmw7" [b621ae1b-52fc-4d15-b7bd-b6b9d074d419] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0801 17:52:39.508455   32787 system_pods.go:61] "kube-scheduler-newest-cni-20220801175129-13911" [c70c5eb8-13e4-400c-aa52-2a94e85f0c5e] Running
	I0801 17:52:39.508464   32787 system_pods.go:61] "metrics-server-5c6f97fb75-qwvtt" [6f1f27bb-dc60-477b-9476-b02a8d1c7b00] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:52:39.508473   32787 system_pods.go:61] "storage-provisioner" [bfbcaa76-3903-4a2c-9081-426d2c26ec38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:52:39.508483   32787 system_pods.go:74] duration metric: took 11.189474ms to wait for pod list to return data ...
	I0801 17:52:39.508489   32787 default_sa.go:34] waiting for default service account to be created ...
	I0801 17:52:39.513490   32787 default_sa.go:45] found service account: "default"
	I0801 17:52:39.513508   32787 default_sa.go:55] duration metric: took 5.01286ms for default service account to be created ...
	I0801 17:52:39.513520   32787 kubeadm.go:572] duration metric: took 480.525733ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0801 17:52:39.513574   32787 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:52:39.519363   32787 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:52:39.519381   32787 node_conditions.go:123] node cpu capacity is 6
	I0801 17:52:39.519391   32787 node_conditions.go:105] duration metric: took 5.809738ms to run NodePressure ...
	I0801 17:52:39.519419   32787 start.go:216] waiting for startup goroutines ...
	I0801 17:52:39.586557   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.587672   32787 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:52:39.587683   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:52:39.587736   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.588232   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.590309   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.669406   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.720677   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0801 17:52:39.720697   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0801 17:52:39.723721   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0801 17:52:39.723735   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0801 17:52:39.732299   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:52:39.803502   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0801 17:52:39.803524   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0801 17:52:39.807057   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0801 17:52:39.807074   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0801 17:52:39.824461   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:52:39.824576   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:52:39.824587   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0801 17:52:39.831720   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0801 17:52:39.831736   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0801 17:52:39.908512   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:52:39.914142   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0801 17:52:39.914183   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0801 17:52:39.936302   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0801 17:52:39.936320   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0801 17:52:40.028581   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0801 17:52:40.028597   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0801 17:52:40.130449   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0801 17:52:40.130463   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0801 17:52:40.215708   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0801 17:52:40.215722   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0801 17:52:40.232043   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:52:40.232058   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0801 17:52:40.250218   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:52:40.829184   32787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096813176s)
	I0801 17:52:40.829230   32787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.004700844s)
	I0801 17:52:40.843288   32787 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220801175129-13911"
	I0801 17:52:40.938463   32787 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0801 17:52:40.975473   32787 addons.go:414] enableAddons completed in 1.942433849s
	I0801 17:52:41.005347   32787 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:52:41.027519   32787 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220801175129-13911" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:52:25 UTC, end at Tue 2022-08-02 00:53:18 UTC. --
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.698716417Z" level=info msg="Starting up"
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.700446865Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.700478956Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.700501383Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.700516384Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.701564324Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.701671187Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.701724452Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.701793482Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.705385014Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.712827685Z" level=info msg="Loading containers: start."
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.813852910Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.848115938Z" level=info msg="Loading containers: done."
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.856515700Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.856579228Z" level=info msg="Daemon has completed initialization"
	Aug 02 00:52:27 newest-cni-20220801175129-13911 systemd[1]: Started Docker Application Container Engine.
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.876738013Z" level=info msg="API listen on [::]:2376"
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.882830943Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 02 00:52:39 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:39.708755992Z" level=info msg="ignoring event" container=c993c9155a2e96045557d445db4a5acf7f0f83e87e4170c02114731d2230f6d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:52:40 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:40.249572703Z" level=info msg="ignoring event" container=b0932aedb371472e128b715087244d6cd2e834d94a93d47b71529d56fd99a1e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:52:41 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:41.570436507Z" level=info msg="ignoring event" container=985847ae7a12736848a800899d12c3126170598d451834980676f98a26fdcf79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:52:41 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:41.579243072Z" level=info msg="ignoring event" container=b43c3fd1863143f2c0fcb438ed79bed2615fda00b4e4caf1ea1eaa60851d1ceb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:52:42 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:42.541670478Z" level=info msg="ignoring event" container=5e9e6be0892b9d81209c89019f7b4a9b9fd7d06bda8a56c03d1171a1595849a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:52:42 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:42.681775388Z" level=info msg="ignoring event" container=a70c7db0f9385f277d5e5ae2fdc2f3040458a4cf6a4e4454b0ac7f5419c7c833 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:15 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:15.709130914Z" level=info msg="ignoring event" container=c8176b7cf5aecc2f604f38ccecc9238511716f438707cc65ce84b44ce9625151 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a07413c01f8e5       2ae1ba6417cbc       39 seconds ago       Running             kube-proxy                1                   c775307791fd6
	c8176b7cf5aec       6e38f40d628db       39 seconds ago       Exited              storage-provisioner       1                   d5f23cf2891c8
	3d09f870ce6cd       3a5aa3a515f5d       44 seconds ago       Running             kube-scheduler            1                   613a2136042e2
	ae860e15d0e41       586c112956dfc       44 seconds ago       Running             kube-controller-manager   1                   c8b6e150dfcfc
	9d751a265b494       d521dd763e2e3       44 seconds ago       Running             kube-apiserver            1                   f4cdbd59ab656
	8b0f55d802b01       aebe758cef4cd       44 seconds ago       Running             etcd                      1                   ca715a41b73c4
	0da10eabf430a       2ae1ba6417cbc       About a minute ago   Exited              kube-proxy                0                   9e2b4b1800e13
	ed072705134c0       aebe758cef4cd       About a minute ago   Exited              etcd                      0                   ae7511f543c84
	af02fe8a26739       3a5aa3a515f5d       About a minute ago   Exited              kube-scheduler            0                   06a54abbd12b8
	42d0d44d7c6f1       586c112956dfc       About a minute ago   Exited              kube-controller-manager   0                   d9acb50e1a8c4
	d698d4a205537       d521dd763e2e3       About a minute ago   Exited              kube-apiserver            0                   aeff65b18cdf6
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220801175129-13911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220801175129-13911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93
	                    minikube.k8s.io/name=newest-cni-20220801175129-13911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_08_01T17_51_55_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Aug 2022 00:51:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220801175129-13911
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Aug 2022 00:53:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Aug 2022 00:53:16 +0000   Tue, 02 Aug 2022 00:51:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Aug 2022 00:53:16 +0000   Tue, 02 Aug 2022 00:51:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Aug 2022 00:53:16 +0000   Tue, 02 Aug 2022 00:51:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 02 Aug 2022 00:53:16 +0000   Tue, 02 Aug 2022 00:53:16 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    newest-cni-20220801175129-13911
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c192b04687c403f8fbb9bc7975b21b3
	  System UUID:                73c12afa-3566-4b51-b1a4-de54f0cd6723
	  Boot ID:                    71cf565c-fd32-45eb-95e1-c87a7a5ba5a0
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-cs7mc                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     70s
	  kube-system                 etcd-newest-cni-20220801175129-13911                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kube-apiserver-newest-cni-20220801175129-13911             250m (4%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-newest-cni-20220801175129-13911    200m (3%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-2pmw7                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-newest-cni-20220801175129-13911             100m (1%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 metrics-server-5c6f97fb75-qwvtt                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         68s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38s                kube-proxy       
	  Normal  Starting                 70s                kube-proxy       
	  Normal  NodeHasSufficientMemory  94s (x5 over 94s)  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x5 over 94s)  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x4 over 94s)  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  83s                kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s                kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s                kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientPID
	  Normal  NodeReady                83s                kubelet          Node newest-cni-20220801175129-13911 status is now: NodeReady
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           72s                node-controller  Node newest-cni-20220801175129-13911 event: Registered Node newest-cni-20220801175129-13911 in Controller
	  Normal  NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientPID
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-20220801175129-13911 event: Registered Node newest-cni-20220801175129-13911 in Controller
	  Normal  Starting                 2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s                 kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s                 kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s                 kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s                 kubelet          Node newest-cni-20220801175129-13911 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet          Updated Node Allocatable limit across pods
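	
	The Ready=False condition above ("PLEG is not healthy: pleg has yet to be successful"), together with the NodeNotReady event 2s before this snapshot, lines up with the CreatePodSandbox failures recorded in the kubelet log below. One way to confirm the node and pod state from the host, assuming the profile is still running (these commands are illustrative and were not part of the recorded test run):
	
	    out/minikube-darwin-amd64 -p newest-cni-20220801175129-13911 kubectl -- get nodes
	    out/minikube-darwin-amd64 -p newest-cni-20220801175129-13911 kubectl -- get pods -A -o wide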
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [8b0f55d802b0] <==
	* {"level":"info","ts":"2022-08-02T00:52:34.356Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-08-02T00:52:34.357Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-08-02T00:52:34.357Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-08-02T00:52:34.357Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-08-02T00:52:34.357Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:52:35.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-08-02T00:52:35.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:52:35.549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:52:35.549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-08-02T00:52:35.549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-08-02T00:52:35.549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-08-02T00:52:35.549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-08-02T00:52:35.552Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220801175129-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:52:35.551Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:52:35.552Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:52:35.553Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:52:35.556Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-08-02T00:52:35.557Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:52:35.557Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [ed072705134c] <==
	* {"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220801175129-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:51:50.249Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:51:50.249Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-08-02T00:51:50.249Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:51:50.250Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-08-02T00:52:11.085Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-08-02T00:52:11.085Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220801175129-13911","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/08/02 00:52:11 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/08/02 00:52:11 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-08-02T00:52:11.130Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-08-02T00:52:11.132Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:52:11.134Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:52:11.134Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220801175129-13911","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  00:53:19 up  1:18,  0 users,  load average: 0.94, 0.84, 0.85
	Linux newest-cni-20220801175129-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [9d751a265b49] <==
	* I0802 00:52:37.550529       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 00:52:37.560051       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 00:52:37.573442       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0802 00:52:37.605246       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 00:52:38.229308       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0802 00:52:38.449501       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0802 00:52:38.564266       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:52:38.564302       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:52:38.564308       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:52:38.564338       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:52:38.564362       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:52:38.565359       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0802 00:52:38.924952       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 00:52:38.932078       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0802 00:52:38.960564       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0802 00:52:39.003450       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 00:52:39.008152       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 00:52:40.313661       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0802 00:52:40.809737       1 controller.go:611] quota admission added evaluator for: namespaces
	I0802 00:52:40.883865       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.43.5]
	I0802 00:52:40.908312       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.101.179.10]
	I0802 00:53:16.126001       1 controller.go:611] quota admission added evaluator for: endpoints
	I0802 00:53:16.326140       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0802 00:53:16.489793       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [d698d4a20553] <==
	* W0802 00:52:20.458798       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.468152       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.480173       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.499903       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.506848       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.520035       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.536902       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.583631       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.585379       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.616521       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.622229       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.654284       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.683893       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.685741       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.711923       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.737549       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.768670       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.803625       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.805552       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.810970       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.831948       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.939672       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.961435       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:21.019256       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:21.046106       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [42d0d44d7c6f] <==
	* I0802 00:52:07.089965       1 shared_informer.go:262] Caches are synced for deployment
	I0802 00:52:07.110480       1 shared_informer.go:262] Caches are synced for daemon sets
	I0802 00:52:07.128964       1 shared_informer.go:262] Caches are synced for stateful set
	I0802 00:52:07.130129       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0802 00:52:07.131809       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0802 00:52:07.131829       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0802 00:52:07.133044       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0802 00:52:07.149093       1 shared_informer.go:262] Caches are synced for service account
	I0802 00:52:07.178228       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0802 00:52:07.240783       1 shared_informer.go:262] Caches are synced for namespace
	I0802 00:52:07.244822       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 00:52:07.291938       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 00:52:07.647190       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 00:52:07.725554       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0802 00:52:07.731807       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 00:52:07.731836       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0802 00:52:07.777549       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2pmw7"
	I0802 00:52:08.039729       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-cs7mc"
	I0802 00:52:08.043289       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-mzbjk"
	I0802 00:52:08.183585       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0802 00:52:08.187043       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-mzbjk"
	I0802 00:52:10.476104       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0802 00:52:10.479360       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0802 00:52:10.486241       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0802 00:52:10.491527       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-qwvtt"
	
	* 
	* ==> kube-controller-manager [ae860e15d0e4] <==
	* I0802 00:53:16.103724       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0802 00:53:16.106194       1 shared_informer.go:262] Caches are synced for cronjob
	I0802 00:53:16.109696       1 shared_informer.go:262] Caches are synced for ephemeral
	I0802 00:53:16.110820       1 shared_informer.go:262] Caches are synced for HPA
	I0802 00:53:16.128630       1 shared_informer.go:262] Caches are synced for crt configmap
	I0802 00:53:16.199642       1 shared_informer.go:262] Caches are synced for taint
	I0802 00:53:16.199809       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0802 00:53:16.199914       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220801175129-13911. Assuming now as a timestamp.
	I0802 00:53:16.199967       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0802 00:53:16.200272       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0802 00:53:16.200438       1 event.go:294] "Event occurred" object="newest-cni-20220801175129-13911" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220801175129-13911 event: Registered Node newest-cni-20220801175129-13911 in Controller"
	I0802 00:53:16.213212       1 shared_informer.go:262] Caches are synced for daemon sets
	I0802 00:53:16.221775       1 shared_informer.go:262] Caches are synced for attach detach
	I0802 00:53:16.292970       1 shared_informer.go:262] Caches are synced for disruption
	I0802 00:53:16.293004       1 disruption.go:371] Sending events to api server.
	I0802 00:53:16.297716       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 00:53:16.308535       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 00:53:16.330141       1 shared_informer.go:262] Caches are synced for deployment
	I0802 00:53:16.492166       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0802 00:53:16.494871       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0802 00:53:16.598301       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-k7hpg"
	I0802 00:53:16.601256       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-h9lkj"
	I0802 00:53:16.722889       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 00:53:16.734206       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 00:53:16.734242       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [0da10eabf430] <==
	* I0802 00:52:08.378446       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:52:08.378587       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:52:08.379581       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:52:08.401386       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:52:08.401426       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:52:08.401433       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:52:08.401443       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:52:08.401525       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:52:08.401707       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:52:08.401907       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:52:08.401936       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:52:08.402361       1 config.go:317] "Starting service config controller"
	I0802 00:52:08.402388       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:52:08.402549       1 config.go:444] "Starting node config controller"
	I0802 00:52:08.402575       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:52:08.402576       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:52:08.402583       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:52:08.503686       1 shared_informer.go:262] Caches are synced for node config
	I0802 00:52:08.520005       1 shared_informer.go:262] Caches are synced for service config
	I0802 00:52:08.520116       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [a07413c01f8e] <==
	* I0802 00:52:40.232914       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:52:40.232984       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:52:40.233007       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:52:40.311130       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:52:40.311169       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:52:40.311177       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:52:40.311186       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:52:40.311211       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:52:40.311319       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:52:40.311433       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:52:40.311440       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:52:40.312043       1 config.go:317] "Starting service config controller"
	I0802 00:52:40.312125       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:52:40.312142       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:52:40.312145       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:52:40.312793       1 config.go:444] "Starting node config controller"
	I0802 00:52:40.312802       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:52:40.412494       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0802 00:52:40.412542       1 shared_informer.go:262] Caches are synced for service config
	I0802 00:52:40.412888       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3d09f870ce6c] <==
	* W0802 00:52:34.439289       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0802 00:52:34.934421       1 serving.go:348] Generated self-signed cert in-memory
	W0802 00:52:37.486093       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 00:52:37.486130       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0802 00:52:37.486153       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 00:52:37.486157       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 00:52:37.513029       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0802 00:52:37.513806       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:52:37.515238       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0802 00:52:37.515681       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 00:52:37.515824       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 00:52:37.515938       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 00:52:37.616689       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [af02fe8a2673] <==
	* W0802 00:51:52.144921       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 00:51:52.144929       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 00:51:52.144935       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0802 00:51:52.144944       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 00:51:52.144979       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 00:51:52.145098       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 00:51:52.145124       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 00:51:52.145207       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 00:51:52.145223       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 00:51:53.011992       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0802 00:51:53.012043       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0802 00:51:53.032116       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 00:51:53.032186       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0802 00:51:53.114775       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 00:51:53.114813       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 00:51:53.131778       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 00:51:53.131797       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 00:51:53.151647       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 00:51:53.151684       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 00:51:53.343195       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 00:51:53.343234       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0802 00:51:56.238565       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 00:52:11.130458       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0802 00:52:11.130815       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0802 00:52:11.130979       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:52:25 UTC, end at Tue 2022-08-02 00:53:21 UTC. --
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         Try `iptables -h' or 'iptables --help' for more information.
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         ]
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:  > pod="kube-system/metrics-server-5c6f97fb75-qwvtt"
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]: E0802 00:53:20.564477    3486 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-5c6f97fb75-qwvtt_kube-system(6f1f27bb-dc60-477b-9476-b02a8d1c7b00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-5c6f97fb75-qwvtt_kube-system(6f1f27bb-dc60-477b-9476-b02a8d1c7b00)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"984184e251a2b846d54ad46b5de058e027126b0122e5054b5f6f1fc6e375e194\\\" network for pod \\\"metrics-server-5c6f97fb75-qwvtt\\\": networkPlugin cni failed to set up pod \\\"metrics-server-5c6f97fb75-qwvtt_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"984184e251a2b846d54ad46b5de058e027126b0122e5054b5f6f1fc6e375e194\\\" network for pod \\\"metrics-server-5c6f97fb75-qwvtt\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-5c6f97fb75-qwvtt_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.13 -j CNI-f0cdc7ea0a5961b459cf2754 -m comment --comment name: \\\"crio\\\" id: \\\"984184e251a2b846d54ad46b5de058e027126b0122e5054b5f6f1fc6e375e194\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f0cdc7ea0a5961b459cf2754':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-5c6f97fb75-qwvtt" podUID=6f1f27bb-dc60-477b-9476-b02a8d1c7b00
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]: E0802 00:53:20.794229    3486 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err=<
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         rpc error: code = Unknown desc = [failed to set up sandbox container "08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e" network for pod "coredns-6d4b75cb6d-cs7mc": networkPlugin cni failed to set up pod "coredns-6d4b75cb6d-cs7mc_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e" network for pod "coredns-6d4b75cb6d-cs7mc": networkPlugin cni failed to teardown pod "coredns-6d4b75cb6d-cs7mc_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-c72d849db4d2a0ebafe04821 -m comment --comment name: "crio" id: "08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c72d849db4d2a0ebafe04821':No such file or directory
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         Try `iptables -h' or 'iptables --help' for more information.
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         ]
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:  >
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]: E0802 00:53:20.794289    3486 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         rpc error: code = Unknown desc = [failed to set up sandbox container "08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e" network for pod "coredns-6d4b75cb6d-cs7mc": networkPlugin cni failed to set up pod "coredns-6d4b75cb6d-cs7mc_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e" network for pod "coredns-6d4b75cb6d-cs7mc": networkPlugin cni failed to teardown pod "coredns-6d4b75cb6d-cs7mc_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-c72d849db4d2a0ebafe04821 -m comment --comment name: "crio" id: "08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c72d849db4d2a0ebafe04821':No such file or directory
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         Try `iptables -h' or 'iptables --help' for more information.
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         ]
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:  > pod="kube-system/coredns-6d4b75cb6d-cs7mc"
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]: E0802 00:53:20.794306    3486 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         rpc error: code = Unknown desc = [failed to set up sandbox container "08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e" network for pod "coredns-6d4b75cb6d-cs7mc": networkPlugin cni failed to set up pod "coredns-6d4b75cb6d-cs7mc_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e" network for pod "coredns-6d4b75cb6d-cs7mc": networkPlugin cni failed to teardown pod "coredns-6d4b75cb6d-cs7mc_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-c72d849db4d2a0ebafe04821 -m comment --comment name: "crio" id: "08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c72d849db4d2a0ebafe04821':No such file or directory
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         Try `iptables -h' or 'iptables --help' for more information.
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:         ]
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]:  > pod="kube-system/coredns-6d4b75cb6d-cs7mc"
	Aug 02 00:53:20 newest-cni-20220801175129-13911 kubelet[3486]: E0802 00:53:20.794377    3486 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-cs7mc_kube-system(c15c9885-12b6-401a-80b5-306326ed8760)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-cs7mc_kube-system(c15c9885-12b6-401a-80b5-306326ed8760)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e\\\" network for pod \\\"coredns-6d4b75cb6d-cs7mc\\\": networkPlugin cni failed to set up pod \\\"coredns-6d4b75cb6d-cs7mc_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e\\\" network for pod \\\"coredns-6d4b75cb6d-cs7mc\\\": networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-cs7mc_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-c72d849db4d2a0ebafe04821 -m comment --comment name: \\\"crio\\\" id: \\\"08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c72d849db4d2a0ebafe04821':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-6d4b75cb6d-cs7mc" podUID=c15c9885-12b6-401a-80b5-306326ed8760
	Aug 02 00:53:21 newest-cni-20220801175129-13911 kubelet[3486]: I0802 00:53:21.069284    3486 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="43ce81cc04d559e205749e9a976366ad56aed407ef4d30a5e162b59e12d48874"
	Aug 02 00:53:21 newest-cni-20220801175129-13911 kubelet[3486]: I0802 00:53:21.069307    3486 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e"
	
	* 
	* ==> storage-provisioner [c8176b7cf5ae] <==
	* I0802 00:52:39.315258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0802 00:53:15.607724       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	

-- /stdout --
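
Every sandbox failure in the kubelet log above follows the same pattern: CNI setup fails with `could not add IP address to "cni0": permission denied`, and teardown then fails because the per-sandbox `CNI-*` NAT chain was never installed. A minimal sketch for confirming both symptoms from the host, assuming the node container is still running (the chain name is the one captured in this run's log):

	# sketch: check whether the cni0 bridge ever received an address (assumes the container is still up)
	docker exec newest-cni-20220801175129-13911 ip addr show cni0
	# sketch: look for the per-sandbox NAT chain that the teardown path could not find
	docker exec newest-cni-20220801175129-13911 iptables -t nat -S | grep CNI-f0cdc7ea0a5961b459cf2754
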
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220801175129-13911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:261: (dbg) Done: kubectl --context newest-cni-20220801175129-13911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.995508146s)
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-cs7mc metrics-server-5c6f97fb75-qwvtt dashboard-metrics-scraper-dffd48c4c-k7hpg kubernetes-dashboard-5fd5574d9f-h9lkj
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220801175129-13911 describe pod coredns-6d4b75cb6d-cs7mc metrics-server-5c6f97fb75-qwvtt dashboard-metrics-scraper-dffd48c4c-k7hpg kubernetes-dashboard-5fd5574d9f-h9lkj
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220801175129-13911 describe pod coredns-6d4b75cb6d-cs7mc metrics-server-5c6f97fb75-qwvtt dashboard-metrics-scraper-dffd48c4c-k7hpg kubernetes-dashboard-5fd5574d9f-h9lkj: exit status 1 (207.57345ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-cs7mc" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-qwvtt" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-k7hpg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-h9lkj" not found

** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220801175129-13911 describe pod coredns-6d4b75cb6d-cs7mc metrics-server-5c6f97fb75-qwvtt dashboard-metrics-scraper-dffd48c4c-k7hpg kubernetes-dashboard-5fd5574d9f-h9lkj: exit status 1
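
The four NotFound errors are most likely a race rather than a separate failure: in the roughly two seconds between the `get po` listing and the `describe` call, the controllers had already replaced the non-running pods, so every captured pod name was gone by the time it was described. Describing by label selector re-resolves whatever pod currently exists; the selector below is the stock CoreDNS label and is illustrative only, not part of the harness:

	# sketch: describe the current CoreDNS pod instead of a stale captured name
	kubectl --context newest-cni-20220801175129-13911 -n kube-system describe pod -l k8s-app=kube-dns
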
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220801175129-13911
helpers_test.go:235: (dbg) docker inspect newest-cni-20220801175129-13911:

-- stdout --
	[
	    {
	        "Id": "975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238",
	        "Created": "2022-08-02T00:51:36.137976042Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 316296,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-02T00:52:24.903576593Z",
	            "FinishedAt": "2022-08-02T00:52:22.987858752Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238/hostname",
	        "HostsPath": "/var/lib/docker/containers/975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238/hosts",
	        "LogPath": "/var/lib/docker/containers/975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238/975a147da1dcb8b9bc22e5d95ce97fad8314e2e6be7a3b765bae891eb0388238-json.log",
	        "Name": "/newest-cni-20220801175129-13911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-20220801175129-13911:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220801175129-13911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e82e7385558f16e0b3468f1017b5759826988a63794b16cf4a23999d15d07831-init/diff:/var/lib/docker/overlay2/be71a36154d4edc0752055d83a2e555b4962558d96f6429a0d41e4e68d05b63f/diff:/var/lib/docker/overlay2/ba7421c2975740268085547b1655aae5391d13c1075ebb4ba1f8117f463bd4c2/diff:/var/lib/docker/overlay2/465cf41fe65260bf7242026f69bda2df1b41f3c6cb1c2c4cebc85c67fd7a301a/diff:/var/lib/docker/overlay2/66adbc624e035a6faee6aa46768772cddc9bb74546970d2c2c0d6743e0829994/diff:/var/lib/docker/overlay2/09f000d5dd30f003d32767873ae37cc903ac89581a71d8886bc5aeb322aa3ab4/diff:/var/lib/docker/overlay2/be471f380264ec1eb9748d6553eede2d6edf02682634c67a08e204b723e2d20d/diff:/var/lib/docker/overlay2/80869be11aaeb74fb37405a1a24a672ee70ac196ca84d28c5c8302e8312d7d67/diff:/var/lib/docker/overlay2/94d33074056c62a8da7d87adaa9ab73ffd1c948aaa113278ba494a4f88646ddb/diff:/var/lib/docker/overlay2/d34b58bbf8e2ecb5714576ab502102b0acc61f22493fb57d3b14df7ec8419564/diff:/var/lib/docker/overlay2/d03e4e
c465b0cb9c7e09598748bcbb7474edb27c8668607ef946874593b50587/diff:/var/lib/docker/overlay2/e205c5560188b8cec4a1d9289cc3dd1c762cf262a85b104ff324f13ccfa313af/diff:/var/lib/docker/overlay2/2fadab3eaa886a1d31feac41bbcc407d35694df52fef8eb2d592447dbe023c4d/diff:/var/lib/docker/overlay2/4f3477d6c696a4b38462ce1c91ed04b16e03012bcfe21c0eb30fb83ae775607f/diff:/var/lib/docker/overlay2/96ac290d92c134fcfed82a6e4abf9392437b5cd36d6079838f8a4c624f8a54d2/diff:/var/lib/docker/overlay2/43819e42b77cf6affba5f7e88626f581e04b88e6704601938a854a66e926430b/diff:/var/lib/docker/overlay2/736ccb40af76df428b935706748e3603bc6c2d42df6d44f2acdd1c762acda416/diff:/var/lib/docker/overlay2/d634abf8480afd35a78f6b621b6230bfe5eea8ed58e89a88db0ad4f96dca3737/diff:/var/lib/docker/overlay2/375f400edfc97c07e9a24b17e81dfe325254b86653c7154904051371059d9de3/diff:/var/lib/docker/overlay2/c7b941645b851a8a313e0dc08724fd380a1d985a803de048050d2824e748d693/diff:/var/lib/docker/overlay2/20db6f1d248b30381ae84de894ba5e6dec30126e2deded7b864d3b73d0cf8d8b/diff:/var/lib/d
ocker/overlay2/5d21ba108b65674a68678a637f12ecd28ac924f9d1bbdc4efceb9990b42e7306/diff:/var/lib/docker/overlay2/23116ad74e7cb919e8fcd63553bc912de522d721afd3dfd94cebc493f48c9ed0/diff:/var/lib/docker/overlay2/192ee5faaac01c20f16a6adb373ec285276bd581a68273e58e18cbaa82b68e5f/diff:/var/lib/docker/overlay2/e4f0c7dd9dc509b456b365323a3239ec8f43a940eaf84d79ba412443526ea62a/diff:/var/lib/docker/overlay2/a0f3d7a9c7d29646e98f98b5cef53138548e68b89b10991f862bfd3b8e6e701e/diff:/var/lib/docker/overlay2/4e791dc0cf0bf0b4a4721985859a02f8262dbc612626048423f93c2095fd74a5/diff:/var/lib/docker/overlay2/ff33ea15b646cb4188c4598ceff2caa62dc99778f8c262ae7237001367fe9efa/diff:/var/lib/docker/overlay2/9fde99091a6ba3894a270c8a87f0b08a83d064aab1c289c4dc8898a2a7398c16/diff:/var/lib/docker/overlay2/09d8aeda2f2737d2e8f6c570f900a09028727e4281e8c78cd9c3c40e82e94b25/diff:/var/lib/docker/overlay2/57afb992e1473b91847c620cd5f4db3c83741a5147ddc56436a4b602d51f6ace/diff:/var/lib/docker/overlay2/8714e3633b5414b4d41bca8f718e7e777cef6d719b3990b1fa6b1fe8ffd
1a52a/diff:/var/lib/docker/overlay2/af6db37ea0e72105b606f634e174acb8122a35a91b0130533afb45bfbce82f18/diff:/var/lib/docker/overlay2/821fa7a594ba5e494d6ba1e650f3e00f3ca87444f5da095b2be780a56f53d3ae/diff:/var/lib/docker/overlay2/35fcd5785426a72cf1a42e850e5e46382e163669915e6f52656a7442242f570c/diff:/var/lib/docker/overlay2/226ffc769fcd00df0815916d6c9bf671db5f35974b5856c1c23218983f93cbd1/diff:/var/lib/docker/overlay2/29607f8d067ae0a92a8f4417c62b3dba3cea8ee95f1b49679b9b74e960b66e4c/diff:/var/lib/docker/overlay2/a35e15ba7f86a910793d725c33ea06a6da4e13469b63ee617578c5d3f95d70a1/diff:/var/lib/docker/overlay2/92405078febfdb1bcff4ce6f517da24989ea614c2a9150226bf1e18f71e0d77e/diff:/var/lib/docker/overlay2/5bb3f1544e56efddd074f9e35dec621d462e58813ff5669eaebabdb2c6f99924/diff:/var/lib/docker/overlay2/cfee9b1d30fb0331ef891ce98c552adcbe50e9e515fb92c3f7c9785322773b8a/diff:/var/lib/docker/overlay2/bf1b585fc239a7f0a8b5f944ece94e927686403328792ebc73eadf096967950d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e82e7385558f16e0b3468f1017b5759826988a63794b16cf4a23999d15d07831/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e82e7385558f16e0b3468f1017b5759826988a63794b16cf4a23999d15d07831/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e82e7385558f16e0b3468f1017b5759826988a63794b16cf4a23999d15d07831/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220801175129-13911",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220801175129-13911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220801175129-13911",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220801175129-13911",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220801175129-13911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74dcba454ad55feae7c3d27289ba99cf4223d973f07a3a68f95ce835b9c5bf74",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52996"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52997"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52998"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52999"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53000"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/74dcba454ad5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220801175129-13911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "975a147da1dc",
	                        "newest-cni-20220801175129-13911"
	                    ],
	                    "NetworkID": "ae6cf09ebf463df749fa44ed1b4f2989ff992fc2a5100c17f88bd79c2165a910",
	                    "EndpointID": "c45d16379854d6eb795ba6c339d0a7fd5fac8d8c29c5077f13d38cd6b586af10",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
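
The harness never parses this inspect dump wholesale; it extracts single fields with Go templates, as in the `docker container inspect ... --format={{.State.Status}}` runs that appear later in this log. A sketch of the same technique against the JSON above (the field paths match the dump; the commands themselves are illustrative, not harness output):

	# sketch: container state plus the moment it was last started
	docker inspect -f '{{.State.Status}} {{.State.StartedAt}}' newest-cni-20220801175129-13911
	# sketch: the host port mapped to the guest SSH port (the same template minikube uses below)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-20220801175129-13911
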
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220801175129-13911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220801175129-13911 logs -n 25: (4.432085807s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                                        | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:37 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:37 PDT | 01 Aug 22 17:42 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220801173626-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:43 PDT |
	|         | no-preload-20220801173626-13911                            |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:43 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:44 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:44 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:45 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:45 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:50 PDT | 01 Aug 22 17:50 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:51 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:51 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220801174348-13911 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:51 PDT |
	|         | default-k8s-different-port-20220801174348-13911            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220801175129-13911 --memory=2200           | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:51 PDT | 01 Aug 22 17:52 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220801175129-13911 --memory=2200           | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:52 PDT | 01 Aug 22 17:52 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220801175129-13911                 | jenkins | v1.26.0 | 01 Aug 22 17:53 PDT | 01 Aug 22 17:53 PDT |
	|         | newest-cni-20220801175129-13911                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 17:52:23
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 17:52:23.673228   32787 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:52:23.673420   32787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:52:23.673426   32787 out.go:309] Setting ErrFile to fd 2...
	I0801 17:52:23.673430   32787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:52:23.673533   32787 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:52:23.673982   32787 out.go:303] Setting JSON to false
	I0801 17:52:23.688935   32787 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10314,"bootTime":1659391229,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 17:52:23.689050   32787 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 17:52:23.710547   32787 out.go:177] * [newest-cni-20220801175129-13911] minikube v1.26.0 on Darwin 12.5
	I0801 17:52:23.732744   32787 notify.go:193] Checking for updates...
	I0801 17:52:23.754303   32787 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 17:52:23.776387   32787 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:52:23.797749   32787 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 17:52:23.819483   32787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 17:52:23.841446   32787 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 17:52:23.863222   32787 config.go:180] Loaded profile config "newest-cni-20220801175129-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:52:23.863894   32787 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 17:52:23.933038   32787 docker.go:137] docker version: linux-20.10.17
	I0801 17:52:23.933180   32787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:52:24.067710   32787 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:52:24.000385977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:52:24.109876   32787 out.go:177] * Using the docker driver based on existing profile
	I0801 17:52:24.130849   32787 start.go:284] selected driver: docker
	I0801 17:52:24.130895   32787 start.go:808] validating driver "docker" against &{Name:newest-cni-20220801175129-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:52:24.131069   32787 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 17:52:24.134420   32787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 17:52:24.267922   32787 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-02 00:52:24.202408606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 17:52:24.268091   32787 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0801 17:52:24.268108   32787 cni.go:95] Creating CNI manager for ""
	I0801 17:52:24.268117   32787 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:52:24.268125   32787 start_flags.go:310] config:
	{Name:newest-cni-20220801175129-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:52:24.289987   32787 out.go:177] * Starting control plane node newest-cni-20220801175129-13911 in cluster newest-cni-20220801175129-13911
	I0801 17:52:24.311931   32787 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 17:52:24.333756   32787 out.go:177] * Pulling base image ...
	I0801 17:52:24.375960   32787 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 17:52:24.375962   32787 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:52:24.376098   32787 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0801 17:52:24.376118   32787 cache.go:57] Caching tarball of preloaded images
	I0801 17:52:24.376312   32787 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 17:52:24.377040   32787 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0801 17:52:24.379371   32787 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/config.json ...
	I0801 17:52:24.441569   32787 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 17:52:24.441586   32787 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 17:52:24.441597   32787 cache.go:208] Successfully downloaded all kic artifacts
	I0801 17:52:24.441636   32787 start.go:371] acquiring machines lock for newest-cni-20220801175129-13911: {Name:mk442d39e1f1a32a0afed4f835844094a50c76c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 17:52:24.441709   32787 start.go:375] acquired machines lock for "newest-cni-20220801175129-13911" in 57.497µs
	I0801 17:52:24.441728   32787 start.go:95] Skipping create...Using existing machine configuration
	I0801 17:52:24.441736   32787 fix.go:55] fixHost starting: 
	I0801 17:52:24.441948   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:24.509430   32787 fix.go:103] recreateIfNeeded on newest-cni-20220801175129-13911: state=Stopped err=<nil>
	W0801 17:52:24.509459   32787 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 17:52:24.531450   32787 out.go:177] * Restarting existing docker container for "newest-cni-20220801175129-13911" ...
	I0801 17:52:24.553225   32787 cli_runner.go:164] Run: docker start newest-cni-20220801175129-13911
	I0801 17:52:24.902900   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:24.976389   32787 kic.go:415] container "newest-cni-20220801175129-13911" state is running.
	I0801 17:52:24.977146   32787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220801175129-13911
	I0801 17:52:25.050538   32787 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/config.json ...
	I0801 17:52:25.050942   32787 machine.go:88] provisioning docker machine ...
	I0801 17:52:25.050970   32787 ubuntu.go:169] provisioning hostname "newest-cni-20220801175129-13911"
	I0801 17:52:25.051043   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.126021   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:25.126221   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:25.126238   32787 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220801175129-13911 && echo "newest-cni-20220801175129-13911" | sudo tee /etc/hostname
	I0801 17:52:25.248738   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220801175129-13911
	
	I0801 17:52:25.248824   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.321597   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:25.321746   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:25.321766   32787 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220801175129-13911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220801175129-13911/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220801175129-13911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 17:52:25.434932   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:52:25.434954   32787 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 17:52:25.434989   32787 ubuntu.go:177] setting up certificates
	I0801 17:52:25.435000   32787 provision.go:83] configureAuth start
	I0801 17:52:25.435069   32787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220801175129-13911
	I0801 17:52:25.507735   32787 provision.go:138] copyHostCerts
	I0801 17:52:25.507826   32787 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 17:52:25.507858   32787 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 17:52:25.507959   32787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 17:52:25.508136   32787 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 17:52:25.508145   32787 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 17:52:25.508210   32787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 17:52:25.508393   32787 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 17:52:25.508399   32787 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 17:52:25.508476   32787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1679 bytes)
	I0801 17:52:25.508593   32787 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220801175129-13911 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220801175129-13911]
	I0801 17:52:25.689638   32787 provision.go:172] copyRemoteCerts
	I0801 17:52:25.689698   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 17:52:25.689743   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.760221   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:25.842816   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 17:52:25.859614   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0801 17:52:25.877787   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0801 17:52:25.894499   32787 provision.go:86] duration metric: configureAuth took 459.438174ms
	I0801 17:52:25.894511   32787 ubuntu.go:193] setting minikube options for container-runtime
	I0801 17:52:25.894655   32787 config.go:180] Loaded profile config "newest-cni-20220801175129-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:52:25.894705   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:25.965303   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:25.965461   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:25.965476   32787 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0801 17:52:26.081993   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0801 17:52:26.082006   32787 ubuntu.go:71] root file system type: overlay
	I0801 17:52:26.082200   32787 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0801 17:52:26.082279   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.152899   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:26.153058   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:26.153124   32787 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0801 17:52:26.273565   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0801 17:52:26.273643   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.344500   32787 main.go:134] libmachine: Using SSH client type: native
	I0801 17:52:26.344674   32787 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52996 <nil> <nil>}
	I0801 17:52:26.344687   32787 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0801 17:52:26.461478   32787 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 17:52:26.461494   32787 machine.go:91] provisioned docker machine in 1.410404064s
	I0801 17:52:26.461511   32787 start.go:307] post-start starting for "newest-cni-20220801175129-13911" (driver="docker")
	I0801 17:52:26.461517   32787 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 17:52:26.461594   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 17:52:26.461638   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.533240   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:26.617860   32787 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 17:52:26.621187   32787 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 17:52:26.621203   32787 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 17:52:26.621209   32787 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 17:52:26.621218   32787 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 17:52:26.621226   32787 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 17:52:26.621342   32787 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 17:52:26.621472   32787 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem -> 139112.pem in /etc/ssl/certs
	I0801 17:52:26.621619   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 17:52:26.628457   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:52:26.645485   32787 start.go:310] post-start completed in 183.946981ms
	I0801 17:52:26.645554   32787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 17:52:26.645614   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.716259   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:26.799945   32787 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 17:52:26.804224   32787 fix.go:57] fixHost completed within 2.362249069s
	I0801 17:52:26.804235   32787 start.go:82] releasing machines lock for "newest-cni-20220801175129-13911", held for 2.362281129s
	I0801 17:52:26.804320   32787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220801175129-13911
	I0801 17:52:26.873689   32787 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 17:52:26.873695   32787 ssh_runner.go:195] Run: systemctl --version
	I0801 17:52:26.873749   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.873757   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:26.947817   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:26.950793   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:27.027323   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0801 17:52:27.218560   32787 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0801 17:52:27.231317   32787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:52:27.308327   32787 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0801 17:52:27.385719   32787 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0801 17:52:27.395516   32787 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0801 17:52:27.395572   32787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 17:52:27.404611   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 17:52:27.416899   32787 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0801 17:52:27.490191   32787 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0801 17:52:27.556237   32787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:52:27.626123   32787 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0801 17:52:27.863853   32787 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0801 17:52:27.937016   32787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 17:52:28.004508   32787 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0801 17:52:28.013670   32787 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0801 17:52:28.013735   32787 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0801 17:52:28.017472   32787 start.go:471] Will wait 60s for crictl version
	I0801 17:52:28.017516   32787 ssh_runner.go:195] Run: sudo crictl version
	I0801 17:52:28.045591   32787 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0801 17:52:28.045657   32787 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:52:28.081103   32787 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0801 17:52:28.139569   32787 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0801 17:52:28.139751   32787 cli_runner.go:164] Run: docker exec -t newest-cni-20220801175129-13911 dig +short host.docker.internal
	I0801 17:52:28.271354   32787 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0801 17:52:28.271611   32787 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0801 17:52:28.276062   32787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:52:28.285339   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:28.377645   32787 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0801 17:52:28.400051   32787 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 17:52:28.400188   32787 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:52:28.430349   32787 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0801 17:52:28.430364   32787 docker.go:542] Images already preloaded, skipping extraction
	I0801 17:52:28.430425   32787 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0801 17:52:28.459887   32787 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0801 17:52:28.459907   32787 cache_images.go:84] Images are preloaded, skipping loading
	I0801 17:52:28.459999   32787 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0801 17:52:28.538859   32787 cni.go:95] Creating CNI manager for ""
	I0801 17:52:28.538872   32787 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:52:28.538886   32787 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0801 17:52:28.538897   32787 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220801175129-13911 NodeName:newest-cni-20220801175129-13911 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 17:52:28.539006   32787 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220801175129-13911"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0801 17:52:28.539092   32787 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220801175129-13911 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 17:52:28.539154   32787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 17:52:28.547103   32787 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 17:52:28.547163   32787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 17:52:28.554183   32787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0801 17:52:28.566838   32787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 17:52:28.579088   32787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0801 17:52:28.592069   32787 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 17:52:28.595735   32787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 17:52:28.605024   32787 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911 for IP: 192.168.67.2
	I0801 17:52:28.605135   32787 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 17:52:28.605189   32787 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 17:52:28.605266   32787 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/client.key
	I0801 17:52:28.605323   32787 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/apiserver.key.c7fa3a9e
	I0801 17:52:28.605376   32787 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/proxy-client.key
	I0801 17:52:28.606246   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem (1338 bytes)
	W0801 17:52:28.606339   32787 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911_empty.pem, impossibly tiny 0 bytes
	I0801 17:52:28.606357   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 17:52:28.606485   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 17:52:28.606564   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 17:52:28.606614   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1679 bytes)
	I0801 17:52:28.606880   32787 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem (1708 bytes)
	I0801 17:52:28.607387   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 17:52:28.624412   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0801 17:52:28.641432   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 17:52:28.657666   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/newest-cni-20220801175129-13911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0801 17:52:28.674229   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 17:52:28.719270   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0801 17:52:28.736654   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 17:52:28.753158   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0801 17:52:28.770469   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/139112.pem --> /usr/share/ca-certificates/139112.pem (1708 bytes)
	I0801 17:52:28.787048   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 17:52:28.803970   32787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/13911.pem --> /usr/share/ca-certificates/13911.pem (1338 bytes)
	I0801 17:52:28.821159   32787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 17:52:28.833579   32787 ssh_runner.go:195] Run: openssl version
	I0801 17:52:28.839216   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13911.pem && ln -fs /usr/share/ca-certificates/13911.pem /etc/ssl/certs/13911.pem"
	I0801 17:52:28.846925   32787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13911.pem
	I0801 17:52:28.850785   32787 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 23:39 /usr/share/ca-certificates/13911.pem
	I0801 17:52:28.850830   32787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13911.pem
	I0801 17:52:28.855816   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13911.pem /etc/ssl/certs/51391683.0"
	I0801 17:52:28.862898   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139112.pem && ln -fs /usr/share/ca-certificates/139112.pem /etc/ssl/certs/139112.pem"
	I0801 17:52:28.870230   32787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139112.pem
	I0801 17:52:28.874033   32787 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 23:39 /usr/share/ca-certificates/139112.pem
	I0801 17:52:28.874072   32787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139112.pem
	I0801 17:52:28.879145   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139112.pem /etc/ssl/certs/3ec20f2e.0"
	I0801 17:52:28.886176   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 17:52:28.893778   32787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:52:28.897545   32787 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 23:36 /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:52:28.897586   32787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 17:52:28.902789   32787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 17:52:28.910029   32787 kubeadm.go:395] StartCluster: {Name:newest-cni-20220801175129-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220801175129-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 17:52:28.910133   32787 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:52:28.938635   32787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 17:52:28.946148   32787 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 17:52:28.946164   32787 kubeadm.go:626] restartCluster start
	I0801 17:52:28.946212   32787 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 17:52:28.953014   32787 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:28.953076   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:29.024288   32787 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220801175129-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:52:29.024479   32787 kubeconfig.go:127] "newest-cni-20220801175129-13911" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 17:52:29.024804   32787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:52:29.026040   32787 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 17:52:29.033882   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.033956   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.042444   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.243382   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.243445   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.252434   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.444705   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.444803   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.455884   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.644689   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.644878   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.656009   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:29.844691   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:29.844898   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:29.855271   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.044484   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.044572   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.054992   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.244551   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.244703   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.256059   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.443337   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.443520   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.454029   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.642687   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.642858   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.653207   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:30.842691   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:30.842787   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:30.851677   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.044755   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.044936   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.056082   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.244798   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.244988   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.255847   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.442880   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.442988   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.453367   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.642749   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.642915   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.652975   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:31.844827   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:31.845046   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:31.855350   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.043498   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:32.043651   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:32.053903   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.053914   32787 api_server.go:165] Checking apiserver status ...
	I0801 17:52:32.053961   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 17:52:32.061661   32787 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.061673   32787 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0801 17:52:32.061681   32787 kubeadm.go:1092] stopping kube-system containers ...
	I0801 17:52:32.061735   32787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0801 17:52:32.091970   32787 docker.go:443] Stopping containers: [504a8b59e1ce 6ada3ab8487d e7ffafb0ce3f 46e43480cef2 28e90bf32a64 6686f00cb0ec 0da10eabf430 9e2b4b1800e1 ed072705134c ae7511f543c8 af02fe8a2673 42d0d44d7c6f d698d4a20553 06a54abbd12b aeff65b18cdf d9acb50e1a8c]
	I0801 17:52:32.092047   32787 ssh_runner.go:195] Run: docker stop 504a8b59e1ce 6ada3ab8487d e7ffafb0ce3f 46e43480cef2 28e90bf32a64 6686f00cb0ec 0da10eabf430 9e2b4b1800e1 ed072705134c ae7511f543c8 af02fe8a2673 42d0d44d7c6f d698d4a20553 06a54abbd12b aeff65b18cdf d9acb50e1a8c
	I0801 17:52:32.121345   32787 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 17:52:32.131474   32787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 17:52:32.139435   32787 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  2 00:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug  2 00:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug  2 00:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug  2 00:51 /etc/kubernetes/scheduler.conf
	
	I0801 17:52:32.139495   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 17:52:32.146996   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 17:52:32.154372   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 17:52:32.161557   32787 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.161606   32787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0801 17:52:32.168595   32787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 17:52:32.175658   32787 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 17:52:32.175708   32787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0801 17:52:32.182506   32787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 17:52:32.190951   32787 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 17:52:32.190967   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:32.240435   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:32.997584   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:33.179336   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:33.228816   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:33.285208   32787 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:52:33.285269   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:33.818695   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:34.318201   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:34.329612   32787 api_server.go:71] duration metric: took 1.044336555s to wait for apiserver process to appear ...
	I0801 17:52:34.329626   32787 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:52:34.329634   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:34.330719   32787 api_server.go:256] stopped: https://127.0.0.1:53000/healthz: Get "https://127.0.0.1:53000/healthz": EOF
	I0801 17:52:34.831850   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:37.477511   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0801 17:52:37.477526   32787 api_server.go:102] status: https://127.0.0.1:53000/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 17:52:37.831072   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:37.839203   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:52:37.839219   32787 api_server.go:102] status: https://127.0.0.1:53000/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:52:38.331454   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:38.338440   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 17:52:38.338453   32787 api_server.go:102] status: https://127.0.0.1:53000/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 17:52:38.831673   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:38.839175   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 200:
	ok
	I0801 17:52:38.845486   32787 api_server.go:140] control plane version: v1.24.3
	I0801 17:52:38.845498   32787 api_server.go:130] duration metric: took 4.515618355s to wait for apiserver health ...
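Note: the 403 and 500 responses above are the normal bootstrap sequence, not failures. The apiserver answers before RBAC bootstrap completes, so an unauthenticated probe is Forbidden at first, and /healthz then reports the two post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) as failed until they finish. A rough analogue of the wait loop follows; minikube's own client differs, and port 53000 is simply the forwarded apiserver port from this run:

    # Poll /healthz until it returns HTTP 200 ("ok"); -k skips TLS
    # verification, -f makes curl exit nonzero on the 403/500 responses.
    until curl -ksf https://127.0.0.1:53000/healthz >/dev/null; do
      sleep 0.5
    done
    curl -ks https://127.0.0.1:53000/healthz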
	I0801 17:52:38.845504   32787 cni.go:95] Creating CNI manager for ""
	I0801 17:52:38.845508   32787 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 17:52:38.845520   32787 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:52:38.851916   32787 system_pods.go:59] 8 kube-system pods found
	I0801 17:52:38.851934   32787 system_pods.go:61] "coredns-6d4b75cb6d-cs7mc" [c15c9885-12b6-401a-80b5-306326ed8760] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0801 17:52:38.851947   32787 system_pods.go:61] "etcd-newest-cni-20220801175129-13911" [6c0faf34-6ed0-45fb-8af0-d822ee539d57] Running
	I0801 17:52:38.851952   32787 system_pods.go:61] "kube-apiserver-newest-cni-20220801175129-13911" [faf7abbe-9d33-4c77-89e7-5ee799592377] Running
	I0801 17:52:38.851956   32787 system_pods.go:61] "kube-controller-manager-newest-cni-20220801175129-13911" [eb59c99e-98e9-44e8-bf4c-d8237aaa34ab] Running
	I0801 17:52:38.851961   32787 system_pods.go:61] "kube-proxy-2pmw7" [b621ae1b-52fc-4d15-b7bd-b6b9d074d419] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0801 17:52:38.851966   32787 system_pods.go:61] "kube-scheduler-newest-cni-20220801175129-13911" [c70c5eb8-13e4-400c-aa52-2a94e85f0c5e] Running
	I0801 17:52:38.851970   32787 system_pods.go:61] "metrics-server-5c6f97fb75-qwvtt" [6f1f27bb-dc60-477b-9476-b02a8d1c7b00] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:52:38.851975   32787 system_pods.go:61] "storage-provisioner" [bfbcaa76-3903-4a2c-9081-426d2c26ec38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:52:38.851979   32787 system_pods.go:74] duration metric: took 6.454438ms to wait for pod list to return data ...
	I0801 17:52:38.851985   32787 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:52:38.854828   32787 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:52:38.854841   32787 node_conditions.go:123] node cpu capacity is 6
	I0801 17:52:38.854849   32787 node_conditions.go:105] duration metric: took 2.86028ms to run NodePressure ...
	I0801 17:52:38.854858   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 17:52:39.017659   32787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 17:52:39.028040   32787 ops.go:34] apiserver oom_adj: -16
	I0801 17:52:39.028053   32787 kubeadm.go:630] restartCluster took 10.081237328s
	I0801 17:52:39.028063   32787 kubeadm.go:397] StartCluster complete in 10.117389461s
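For reference, the restartCluster step above re-runs individual kubeadm init phases against the regenerated config rather than a full kubeadm init; condensed from the commands logged verbatim above:

    # Re-issue certs, kubeconfigs, kubelet bootstrap, static pod
    # manifests and local etcd from the existing kubeadm.yaml.
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done

The unquoted $phase is deliberate, so that e.g. "certs all" expands to the two-word subcommand.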
	I0801 17:52:39.028076   32787 settings.go:142] acquiring lock: {Name:mkb750de191cb38457e38d69c03dcc8fc94e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:52:39.028149   32787 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 17:52:39.028753   32787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mkf11dae705688f81ce95312c42c3b9be893d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 17:52:39.032857   32787 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220801175129-13911" rescaled to 1
	I0801 17:52:39.032920   32787 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0801 17:52:39.032937   32787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 17:52:39.033003   32787 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0801 17:52:39.033212   32787 config.go:180] Loaded profile config "newest-cni-20220801175129-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:52:39.058242   32787 out.go:177] * Verifying Kubernetes components...
	I0801 17:52:39.058371   32787 addons.go:65] Setting dashboard=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.094923   32787 addons.go:153] Setting addon dashboard=true in "newest-cni-20220801175129-13911"
	I0801 17:52:39.058373   32787 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.094932   32787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0801 17:52:39.094946   32787 addons.go:162] addon dashboard should already be in state true
	I0801 17:52:39.094970   32787 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220801175129-13911"
	W0801 17:52:39.094990   32787 addons.go:162] addon storage-provisioner should already be in state true
	I0801 17:52:39.058371   32787 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.095021   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.058385   32787 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220801175129-13911"
	I0801 17:52:39.095060   32787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220801175129-13911"
	I0801 17:52:39.095084   32787 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220801175129-13911"
	W0801 17:52:39.095103   32787 addons.go:162] addon metrics-server should already be in state true
	I0801 17:52:39.095109   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.095175   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.095545   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.095796   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.097115   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.097117   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.146335   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.146341   32787 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0801 17:52:39.235029   32787 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220801175129-13911"
	I0801 17:52:39.251541   32787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 17:52:39.271211   32787 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	W0801 17:52:39.271269   32787 addons.go:162] addon default-storageclass should already be in state true
	I0801 17:52:39.308909   32787 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:52:39.330553   32787 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0801 17:52:39.330575   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 17:52:39.330621   32787 host.go:66] Checking if "newest-cni-20220801175129-13911" exists ...
	I0801 17:52:39.406629   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0801 17:52:39.427343   32787 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0801 17:52:39.427370   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0801 17:52:39.407092   32787 cli_runner.go:164] Run: docker container inspect newest-cni-20220801175129-13911 --format={{.State.Status}}
	I0801 17:52:39.465342   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0801 17:52:39.465354   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0801 17:52:39.406765   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.427563   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.460245   32787 api_server.go:51] waiting for apiserver process to appear ...
	I0801 17:52:39.465420   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.465467   32787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 17:52:39.484866   32787 api_server.go:71] duration metric: took 451.841665ms to wait for apiserver process to appear ...
	I0801 17:52:39.484924   32787 api_server.go:87] waiting for apiserver healthz status ...
	I0801 17:52:39.484971   32787 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53000/healthz ...
	I0801 17:52:39.494690   32787 api_server.go:266] https://127.0.0.1:53000/healthz returned 200:
	ok
	I0801 17:52:39.497264   32787 api_server.go:140] control plane version: v1.24.3
	I0801 17:52:39.497280   32787 api_server.go:130] duration metric: took 12.34487ms to wait for apiserver health ...
	I0801 17:52:39.497288   32787 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 17:52:39.508373   32787 system_pods.go:59] 8 kube-system pods found
	I0801 17:52:39.508410   32787 system_pods.go:61] "coredns-6d4b75cb6d-cs7mc" [c15c9885-12b6-401a-80b5-306326ed8760] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0801 17:52:39.508419   32787 system_pods.go:61] "etcd-newest-cni-20220801175129-13911" [6c0faf34-6ed0-45fb-8af0-d822ee539d57] Running
	I0801 17:52:39.508427   32787 system_pods.go:61] "kube-apiserver-newest-cni-20220801175129-13911" [faf7abbe-9d33-4c77-89e7-5ee799592377] Running
	I0801 17:52:39.508439   32787 system_pods.go:61] "kube-controller-manager-newest-cni-20220801175129-13911" [eb59c99e-98e9-44e8-bf4c-d8237aaa34ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0801 17:52:39.508448   32787 system_pods.go:61] "kube-proxy-2pmw7" [b621ae1b-52fc-4d15-b7bd-b6b9d074d419] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0801 17:52:39.508455   32787 system_pods.go:61] "kube-scheduler-newest-cni-20220801175129-13911" [c70c5eb8-13e4-400c-aa52-2a94e85f0c5e] Running
	I0801 17:52:39.508464   32787 system_pods.go:61] "metrics-server-5c6f97fb75-qwvtt" [6f1f27bb-dc60-477b-9476-b02a8d1c7b00] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 17:52:39.508473   32787 system_pods.go:61] "storage-provisioner" [bfbcaa76-3903-4a2c-9081-426d2c26ec38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0801 17:52:39.508483   32787 system_pods.go:74] duration metric: took 11.189474ms to wait for pod list to return data ...
	I0801 17:52:39.508489   32787 default_sa.go:34] waiting for default service account to be created ...
	I0801 17:52:39.513490   32787 default_sa.go:45] found service account: "default"
	I0801 17:52:39.513508   32787 default_sa.go:55] duration metric: took 5.01286ms for default service account to be created ...
	I0801 17:52:39.513520   32787 kubeadm.go:572] duration metric: took 480.525733ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0801 17:52:39.513574   32787 node_conditions.go:102] verifying NodePressure condition ...
	I0801 17:52:39.519363   32787 node_conditions.go:122] node storage ephemeral capacity is 115334268Ki
	I0801 17:52:39.519381   32787 node_conditions.go:123] node cpu capacity is 6
	I0801 17:52:39.519391   32787 node_conditions.go:105] duration metric: took 5.809738ms to run NodePressure ...
	I0801 17:52:39.519419   32787 start.go:216] waiting for startup goroutines ...
	I0801 17:52:39.586557   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.587672   32787 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 17:52:39.587683   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 17:52:39.587736   32787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220801175129-13911
	I0801 17:52:39.588232   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.590309   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.669406   32787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52996 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/newest-cni-20220801175129-13911/id_rsa Username:docker}
	I0801 17:52:39.720677   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0801 17:52:39.720697   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0801 17:52:39.723721   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0801 17:52:39.723735   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0801 17:52:39.732299   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 17:52:39.803502   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0801 17:52:39.803524   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0801 17:52:39.807057   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0801 17:52:39.807074   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0801 17:52:39.824461   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 17:52:39.824576   32787 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:52:39.824587   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0801 17:52:39.831720   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0801 17:52:39.831736   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0801 17:52:39.908512   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 17:52:39.914142   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0801 17:52:39.914183   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0801 17:52:39.936302   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0801 17:52:39.936320   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0801 17:52:40.028581   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0801 17:52:40.028597   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0801 17:52:40.130449   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0801 17:52:40.130463   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0801 17:52:40.215708   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0801 17:52:40.215722   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0801 17:52:40.232043   32787 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:52:40.232058   32787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0801 17:52:40.250218   32787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 17:52:40.829184   32787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096813176s)
	I0801 17:52:40.829230   32787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.004700844s)
	I0801 17:52:40.843288   32787 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220801175129-13911"
	I0801 17:52:40.938463   32787 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0801 17:52:40.975473   32787 addons.go:414] enableAddons completed in 1.942433849s
	I0801 17:52:41.005347   32787 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0801 17:52:41.027519   32787 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220801175129-13911" cluster and "default" namespace by default
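The addon flow above scp's each manifest into /etc/kubernetes/addons on the node and applies it with the cluster's bundled kubectl. The equivalent user-facing operations from the host would be roughly:

    # Same four addons, enabled by name against this profile.
    minikube -p newest-cni-20220801175129-13911 addons enable storage-provisioner
    minikube -p newest-cni-20220801175129-13911 addons enable default-storageclass
    minikube -p newest-cni-20220801175129-13911 addons enable metrics-server
    minikube -p newest-cni-20220801175129-13911 addons enable dashboard
    minikube -p newest-cni-20220801175129-13911 addons list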
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-08-02 00:52:25 UTC, end at Tue 2022-08-02 00:53:25 UTC. --
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.813852910Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.848115938Z" level=info msg="Loading containers: done."
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.856515700Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.856579228Z" level=info msg="Daemon has completed initialization"
	Aug 02 00:52:27 newest-cni-20220801175129-13911 systemd[1]: Started Docker Application Container Engine.
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.876738013Z" level=info msg="API listen on [::]:2376"
	Aug 02 00:52:27 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:27.882830943Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 02 00:52:39 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:39.708755992Z" level=info msg="ignoring event" container=c993c9155a2e96045557d445db4a5acf7f0f83e87e4170c02114731d2230f6d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:52:40 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:40.249572703Z" level=info msg="ignoring event" container=b0932aedb371472e128b715087244d6cd2e834d94a93d47b71529d56fd99a1e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:52:41 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:41.570436507Z" level=info msg="ignoring event" container=985847ae7a12736848a800899d12c3126170598d451834980676f98a26fdcf79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:52:41 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:41.579243072Z" level=info msg="ignoring event" container=b43c3fd1863143f2c0fcb438ed79bed2615fda00b4e4caf1ea1eaa60851d1ceb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:52:42 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:42.541670478Z" level=info msg="ignoring event" container=5e9e6be0892b9d81209c89019f7b4a9b9fd7d06bda8a56c03d1171a1595849a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:52:42 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:52:42.681775388Z" level=info msg="ignoring event" container=a70c7db0f9385f277d5e5ae2fdc2f3040458a4cf6a4e4454b0ac7f5419c7c833 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:15 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:15.709130914Z" level=info msg="ignoring event" container=c8176b7cf5aecc2f604f38ccecc9238511716f438707cc65ce84b44ce9625151 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:19 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:19.627670044Z" level=info msg="ignoring event" container=43ce81cc04d559e205749e9a976366ad56aed407ef4d30a5e162b59e12d48874 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:20 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:20.319471685Z" level=info msg="ignoring event" container=984184e251a2b846d54ad46b5de058e027126b0122e5054b5f6f1fc6e375e194 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:20 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:20.773779455Z" level=info msg="ignoring event" container=08ae5669cc9559e5e9283d2a6255a4853466e6269068a0ecc1c694329ed5971e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:21 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:21.761029899Z" level=info msg="ignoring event" container=c427ec160b339f7c60259273e2c0cf45535e9b7a289a867cec3ee22bb7ff9b31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:21 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:21.762369980Z" level=info msg="ignoring event" container=3b37e03b92ef9aadb6e0b5565298e40c6f16d0646ebb077c2cf24547fb16b107 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:22 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:22.636293569Z" level=info msg="ignoring event" container=5a616bd441a21f5710571400a37f441ef28ee6f291ffca5b815e2e821b407820 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:22 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:22.636345531Z" level=info msg="ignoring event" container=d23cf9b103959ee9025006eacecc30f60eef03b80671f993a6f4055887ca7e59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:24 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:24.570741607Z" level=info msg="ignoring event" container=83bec1b8acbd11ebeba55643298b8b73055203d71d4d686404df24672e307f92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:24 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:24.594616622Z" level=info msg="ignoring event" container=f73ccb858e4101a77fa209936d90c58b657a8780cd10a1d01130bee1e22eca60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:25 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:25.582572547Z" level=info msg="ignoring event" container=ce59b46f982464a4e08c9ae25e3f3c5b70812f6a37f7c64bb5d5c62c7f679192 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 00:53:25 newest-cni-20220801175129-13911 dockerd[541]: time="2022-08-02T00:53:25.613708440Z" level=info msg="ignoring event" container=7ebfa359364fc87e868cd415a7b9e5cef93a5aea4ebbbf5ec5b180ac5e1c594c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
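The Docker section above is journald output captured inside the node (note the "Logs begin at" header). With the docker driver the node is itself a container named after the profile, so the same journal can be read either way:

    # Through minikube's ssh helper:
    minikube -p newest-cni-20220801175129-13911 ssh -- sudo journalctl -u docker --no-pager
    # Or straight through the driver:
    docker exec newest-cni-20220801175129-13911 journalctl -u docker --no-pager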
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	46a7d4aadebd9       6e38f40d628db       7 seconds ago        Running             storage-provisioner       2                   d5f23cf2891c8
	a07413c01f8e5       2ae1ba6417cbc       46 seconds ago       Running             kube-proxy                1                   c775307791fd6
	c8176b7cf5aec       6e38f40d628db       46 seconds ago       Exited              storage-provisioner       1                   d5f23cf2891c8
	3d09f870ce6cd       3a5aa3a515f5d       51 seconds ago       Running             kube-scheduler            1                   613a2136042e2
	ae860e15d0e41       586c112956dfc       51 seconds ago       Running             kube-controller-manager   1                   c8b6e150dfcfc
	9d751a265b494       d521dd763e2e3       51 seconds ago       Running             kube-apiserver            1                   f4cdbd59ab656
	8b0f55d802b01       aebe758cef4cd       51 seconds ago       Running             etcd                      1                   ca715a41b73c4
	0da10eabf430a       2ae1ba6417cbc       About a minute ago   Exited              kube-proxy                0                   9e2b4b1800e13
	ed072705134c0       aebe758cef4cd       About a minute ago   Exited              etcd                      0                   ae7511f543c84
	af02fe8a26739       3a5aa3a515f5d       About a minute ago   Exited              kube-scheduler            0                   06a54abbd12b8
	42d0d44d7c6f1       586c112956dfc       About a minute ago   Exited              kube-controller-manager   0                   d9acb50e1a8c4
	d698d4a205537       d521dd763e2e3       About a minute ago   Exited              kube-apiserver            0                   aeff65b18cdf6
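The status table captures both generations of the control plane: the attempt-0 containers Exited by the restart, their attempt-1 replacements Running, and storage-provisioner already on attempt 2 after its own restart. The same view on a live cluster, using the IDs from the first column:

    # All containers on the node, running and exited:
    minikube -p newest-cni-20220801175129-13911 ssh -- docker ps -a
    # Logs for one of them, e.g. the restarted apiserver:
    minikube -p newest-cni-20220801175129-13911 ssh -- docker logs 9d751a265b494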
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220801175129-13911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220801175129-13911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93
	                    minikube.k8s.io/name=newest-cni-20220801175129-13911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_08_01T17_51_55_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Aug 2022 00:51:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220801175129-13911
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Aug 2022 00:53:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Aug 2022 00:53:16 +0000   Tue, 02 Aug 2022 00:51:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Aug 2022 00:53:16 +0000   Tue, 02 Aug 2022 00:51:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Aug 2022 00:53:16 +0000   Tue, 02 Aug 2022 00:51:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 02 Aug 2022 00:53:16 +0000   Tue, 02 Aug 2022 00:53:16 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    newest-cni-20220801175129-13911
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115334268Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c192b04687c403f8fbb9bc7975b21b3
	  System UUID:                73c12afa-3566-4b51-b1a4-de54f0cd6723
	  Boot ID:                    71cf565c-fd32-45eb-95e1-c87a7a5ba5a0
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-cs7mc                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     78s
	  kube-system                 etcd-newest-cni-20220801175129-13911                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         92s
	  kube-system                 kube-apiserver-newest-cni-20220801175129-13911              250m (4%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-newest-cni-20220801175129-13911    200m (3%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-2pmw7                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-newest-cni-20220801175129-13911              100m (1%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 metrics-server-5c6f97fb75-qwvtt                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         76s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 45s                  kube-proxy       
	  Normal  Starting                 77s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  102s (x5 over 102s)  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x5 over 102s)  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x4 over 102s)  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  91s                  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s                  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientPID
	  Normal  NodeReady                91s                  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeReady
	  Normal  Starting                 91s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           80s                  node-controller  Node newest-cni-20220801175129-13911 event: Registered Node newest-cni-20220801175129-13911 in Controller
	  Normal  NodeAllocatableEnforced  53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x7 over 53s)    kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientPID
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10s                  node-controller  Node newest-cni-20220801175129-13911 event: Registered Node newest-cni-20220801175129-13911 in Controller
	  Normal  Starting                 10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s                  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s                  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s                  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             10s                  kubelet          Node newest-cni-20220801175129-13911 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  10s                  kubelet          Updated Node Allocatable limit across pods
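This block is kubectl describe node output. The only failing condition is Ready=False with "PLEG is not healthy: pleg has yet to be successful", which is typically transient right after a kubelet restart (the kubelet reports NotReady until its pod lifecycle event generator completes a first successful relist) and matches the NodeNotReady event 10s before capture. To watch the node return to Ready:

    kubectl get node newest-cni-20220801175129-13911 -w
    kubectl describe node newest-cni-20220801175129-13911 | grep -A 6 'Conditions:'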
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [8b0f55d802b0] <==
	* {"level":"info","ts":"2022-08-02T00:52:34.356Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-08-02T00:52:34.357Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-08-02T00:52:34.357Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-08-02T00:52:34.357Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-08-02T00:52:34.357Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:52:34.358Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:52:35.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-08-02T00:52:35.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:52:35.549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:52:35.549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-08-02T00:52:35.549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-08-02T00:52:35.549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-08-02T00:52:35.549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-08-02T00:52:35.552Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220801175129-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:52:35.551Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:52:35.552Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:52:35.553Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:52:35.556Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-08-02T00:52:35.557Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:52:35.557Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [ed072705134c] <==
	* {"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-08-02T00:51:50.247Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220801175129-13911 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:51:50.248Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-08-02T00:51:50.249Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-08-02T00:51:50.249Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-08-02T00:51:50.249Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-08-02T00:51:50.250Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-08-02T00:52:11.085Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-08-02T00:52:11.085Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220801175129-13911","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/08/02 00:52:11 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/08/02 00:52:11 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-08-02T00:52:11.130Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-08-02T00:52:11.132Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:52:11.134Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-08-02T00:52:11.134Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220801175129-13911","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  00:53:26 up  1:18,  0 users,  load average: 1.19, 0.89, 0.87
	Linux newest-cni-20220801175129-13911 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
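The kernel section is just uptime, uname and the OS release of the node: with the docker driver on macOS every node shares the Docker Desktop linuxkit VM kernel (5.10.104-linuxkit) beneath an Ubuntu 20.04 node image. Reproducible with:

    # Same three facts the log collector gathers.
    minikube -p newest-cni-20220801175129-13911 ssh -- "uptime && uname -a && grep PRETTY_NAME /etc/os-release"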
	
	* 
	* ==> kube-apiserver [9d751a265b49] <==
	* I0802 00:52:37.550529       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 00:52:37.560051       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 00:52:37.573442       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0802 00:52:37.605246       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 00:52:38.229308       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0802 00:52:38.449501       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0802 00:52:38.564266       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:52:38.564302       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 00:52:38.564308       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 00:52:38.564338       1 handler_proxy.go:102] no RequestInfo found in the context
	E0802 00:52:38.564362       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 00:52:38.565359       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0802 00:52:38.924952       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 00:52:38.932078       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0802 00:52:38.960564       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0802 00:52:39.003450       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 00:52:39.008152       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 00:52:40.313661       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0802 00:52:40.809737       1 controller.go:611] quota admission added evaluator for: namespaces
	I0802 00:52:40.883865       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.43.5]
	I0802 00:52:40.908312       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.101.179.10]
	I0802 00:53:16.126001       1 controller.go:611] quota admission added evaluator for: endpoints
	I0802 00:53:16.326140       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0802 00:53:16.489793       1 controller.go:611] quota admission added evaluator for: replicasets.apps
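The only errors in the restarted apiserver's log are the OpenAPI aggregation retries for v1beta1.metrics.k8s.io: the APIService is registered but the metrics-server pod is still Pending, so the aggregator gets a 503 until it comes up. Its registration state can be checked directly (the k8s-app label follows the upstream metrics-server manifest):

    # Available=False (MissingEndpoints) while the backing pod is
    # Pending; True once metrics-server is serving.
    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl -n kube-system get pods -l k8s-app=metrics-server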
	
	* 
	* ==> kube-apiserver [d698d4a20553] <==
	* W0802 00:52:20.458798       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.468152       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.480173       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.499903       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.506848       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.520035       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.536902       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.583631       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.585379       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.616521       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.622229       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.654284       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.683893       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.685741       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.711923       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.737549       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.768670       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.803625       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.805552       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.810970       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.831948       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.939672       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:20.961435       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:21.019256       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0802 00:52:21.046106       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
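Note: the burst of connection-refused dials to 127.0.0.1:2379 above is the apiserver's gRPC client retrying while etcd is down across the restart; it is transient noise unless it continues after etcd is back up. One hedged health probe, run inside the kubeadm-managed etcd pod (the pod name and certificate paths assume minikube's defaults and are not taken from this report):

	kubectl -n kube-system exec etcd-newest-cni-20220801175129-13911 -- etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health
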
	* 
	* ==> kube-controller-manager [42d0d44d7c6f] <==
	* I0802 00:52:07.089965       1 shared_informer.go:262] Caches are synced for deployment
	I0802 00:52:07.110480       1 shared_informer.go:262] Caches are synced for daemon sets
	I0802 00:52:07.128964       1 shared_informer.go:262] Caches are synced for stateful set
	I0802 00:52:07.130129       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0802 00:52:07.131809       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0802 00:52:07.131829       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0802 00:52:07.133044       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0802 00:52:07.149093       1 shared_informer.go:262] Caches are synced for service account
	I0802 00:52:07.178228       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0802 00:52:07.240783       1 shared_informer.go:262] Caches are synced for namespace
	I0802 00:52:07.244822       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 00:52:07.291938       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 00:52:07.647190       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 00:52:07.725554       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0802 00:52:07.731807       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 00:52:07.731836       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0802 00:52:07.777549       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2pmw7"
	I0802 00:52:08.039729       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-cs7mc"
	I0802 00:52:08.043289       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-mzbjk"
	I0802 00:52:08.183585       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0802 00:52:08.187043       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-mzbjk"
	I0802 00:52:10.476104       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0802 00:52:10.479360       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0802 00:52:10.486241       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0802 00:52:10.491527       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-qwvtt"
	
	* 
	* ==> kube-controller-manager [ae860e15d0e4] <==
	* I0802 00:53:16.106194       1 shared_informer.go:262] Caches are synced for cronjob
	I0802 00:53:16.109696       1 shared_informer.go:262] Caches are synced for ephemeral
	I0802 00:53:16.110820       1 shared_informer.go:262] Caches are synced for HPA
	I0802 00:53:16.128630       1 shared_informer.go:262] Caches are synced for crt configmap
	I0802 00:53:16.199642       1 shared_informer.go:262] Caches are synced for taint
	I0802 00:53:16.199809       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0802 00:53:16.199914       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220801175129-13911. Assuming now as a timestamp.
	I0802 00:53:16.199967       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0802 00:53:16.200272       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0802 00:53:16.200438       1 event.go:294] "Event occurred" object="newest-cni-20220801175129-13911" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220801175129-13911 event: Registered Node newest-cni-20220801175129-13911 in Controller"
	I0802 00:53:16.213212       1 shared_informer.go:262] Caches are synced for daemon sets
	I0802 00:53:16.221775       1 shared_informer.go:262] Caches are synced for attach detach
	I0802 00:53:16.292970       1 shared_informer.go:262] Caches are synced for disruption
	I0802 00:53:16.293004       1 disruption.go:371] Sending events to api server.
	I0802 00:53:16.297716       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 00:53:16.308535       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 00:53:16.330141       1 shared_informer.go:262] Caches are synced for deployment
	I0802 00:53:16.492166       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0802 00:53:16.494871       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0802 00:53:16.598301       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-k7hpg"
	I0802 00:53:16.601256       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-h9lkj"
	I0802 00:53:16.722889       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 00:53:16.734206       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 00:53:16.734242       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0802 00:53:21.200784       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	
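Note: "Controller detected that all Nodes are not-Ready" above is likely tied to the CNI sandbox failures in the kubelet log further down; the node cannot settle into Ready while pod networking fails to come up. The condition is visible directly:

	kubectl --context newest-cni-20220801175129-13911 get nodes
	kubectl --context newest-cni-20220801175129-13911 describe node newest-cni-20220801175129-13911
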
	* 
	* ==> kube-proxy [0da10eabf430] <==
	* I0802 00:52:08.378446       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:52:08.378587       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:52:08.379581       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:52:08.401386       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:52:08.401426       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:52:08.401433       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:52:08.401443       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:52:08.401525       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:52:08.401707       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:52:08.401907       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:52:08.401936       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:52:08.402361       1 config.go:317] "Starting service config controller"
	I0802 00:52:08.402388       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:52:08.402549       1 config.go:444] "Starting node config controller"
	I0802 00:52:08.402575       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:52:08.402576       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:52:08.402583       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:52:08.503686       1 shared_informer.go:262] Caches are synced for node config
	I0802 00:52:08.520005       1 shared_informer.go:262] Caches are synced for service config
	I0802 00:52:08.520116       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [a07413c01f8e] <==
	* I0802 00:52:40.232914       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0802 00:52:40.232984       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0802 00:52:40.233007       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 00:52:40.311130       1 server_others.go:206] "Using iptables Proxier"
	I0802 00:52:40.311169       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0802 00:52:40.311177       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0802 00:52:40.311186       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0802 00:52:40.311211       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:52:40.311319       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 00:52:40.311433       1 server.go:661] "Version info" version="v1.24.3"
	I0802 00:52:40.311440       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:52:40.312043       1 config.go:317] "Starting service config controller"
	I0802 00:52:40.312125       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 00:52:40.312142       1 config.go:226] "Starting endpoint slice config controller"
	I0802 00:52:40.312145       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 00:52:40.312793       1 config.go:444] "Starting node config controller"
	I0802 00:52:40.312802       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 00:52:40.412494       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0802 00:52:40.412542       1 shared_informer.go:262] Caches are synced for service config
	I0802 00:52:40.412888       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3d09f870ce6c] <==
	* W0802 00:52:34.439289       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0802 00:52:34.934421       1 serving.go:348] Generated self-signed cert in-memory
	W0802 00:52:37.486093       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 00:52:37.486130       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0802 00:52:37.486153       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 00:52:37.486157       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 00:52:37.513029       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0802 00:52:37.513806       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 00:52:37.515238       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0802 00:52:37.515681       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 00:52:37.515824       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 00:52:37.515938       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 00:52:37.616689       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [af02fe8a2673] <==
	* W0802 00:51:52.144921       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 00:51:52.144929       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 00:51:52.144935       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0802 00:51:52.144944       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 00:51:52.144979       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 00:51:52.145098       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 00:51:52.145124       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 00:51:52.145207       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 00:51:52.145223       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 00:51:53.011992       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0802 00:51:53.012043       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0802 00:51:53.032116       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 00:51:53.032186       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0802 00:51:53.114775       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 00:51:53.114813       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 00:51:53.131778       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 00:51:53.131797       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 00:51:53.151647       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 00:51:53.151684       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 00:51:53.343195       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 00:51:53.343234       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0802 00:51:56.238565       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 00:52:11.130458       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0802 00:52:11.130815       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0802 00:52:11.130979       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
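Note: the storm of "forbidden" list/watch errors above is startup ordering, not a persistent RBAC problem: the scheduler's informers start before the bootstrap roles and bindings exist, and the errors stop once they are created (caches sync at 00:51:56). The binding it depends on can be spot-checked with:

	kubectl get clusterrolebinding system:kube-scheduler -o wide
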
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-08-02 00:52:25 UTC, end at Tue 2022-08-02 00:53:28 UTC. --
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         rpc error: code = Unknown desc = [failed to set up sandbox container "8e9a193bc42aa743aab1b7a74290c0d22df170e62813af09800d994ea0b39e9a" network for pod "dashboard-metrics-scraper-dffd48c4c-k7hpg": networkPlugin cni failed to set up pod "dashboard-metrics-scraper-dffd48c4c-k7hpg_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "8e9a193bc42aa743aab1b7a74290c0d22df170e62813af09800d994ea0b39e9a" network for pod "dashboard-metrics-scraper-dffd48c4c-k7hpg": networkPlugin cni failed to teardown pod "dashboard-metrics-scraper-dffd48c4c-k7hpg_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.27 -j CNI-03a4c3ac910cdac51037a40c -m comment --comment name: "crio" id: "8e9a193bc42aa743aab1b7a74290c0d22df170e62813af09800d994ea0b39e9a" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-03a4c3ac910cdac51037a40c':No such file or directory
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         Try `iptables -h' or 'iptables --help' for more information.
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         ]
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:  > pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-k7hpg"
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]: E0802 00:53:28.473840    3486 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-dffd48c4c-k7hpg_kubernetes-dashboard(55320b3c-784f-4016-b0ef-7e977f9f5e38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-dffd48c4c-k7hpg_kubernetes-dashboard(55320b3c-784f-4016-b0ef-7e977f9f5e38)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"8e9a193bc42aa743aab1b7a74290c0d22df170e62813af09800d994ea0b39e9a\\\" network for pod \\\"dashboard-metrics-scraper-dffd48c4c-k7hpg\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-dffd48c4c-k7hpg_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"8e9a193bc42aa743aab1b7a74290c0d22df170e62813af09800d994ea0b39e9a\\\" network for pod \\\"dashboard-metrics-scraper-dffd48c4c-k7hpg\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-dffd48c4c-k7hpg_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.27 -j CNI-03a4c3ac910cdac51037a40c -m comment --comment name: \\\"crio\\\" id: \\\"8e9a193bc42aa743aab1b7a74290c0d22df170e62813af09800d994ea0b39e9a\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-03a4c3ac910cdac51037a40c':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-k7hpg" podUID=55320b3c-784f-4016-b0ef-7e977f9f5e38
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]: E0802 00:53:28.733553    3486 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err=<
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         rpc error: code = Unknown desc = [failed to set up sandbox container "68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283" network for pod "kubernetes-dashboard-5fd5574d9f-h9lkj": networkPlugin cni failed to set up pod "kubernetes-dashboard-5fd5574d9f-h9lkj_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283" network for pod "kubernetes-dashboard-5fd5574d9f-h9lkj": networkPlugin cni failed to teardown pod "kubernetes-dashboard-5fd5574d9f-h9lkj_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.28 -j CNI-ccdaf7fbb300cf5645075624 -m comment --comment name: "crio" id: "68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ccdaf7fbb300cf5645075624':No such file or directory
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         Try `iptables -h' or 'iptables --help' for more information.
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         ]
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:  >
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]: E0802 00:53:28.733588    3486 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         rpc error: code = Unknown desc = [failed to set up sandbox container "68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283" network for pod "kubernetes-dashboard-5fd5574d9f-h9lkj": networkPlugin cni failed to set up pod "kubernetes-dashboard-5fd5574d9f-h9lkj_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283" network for pod "kubernetes-dashboard-5fd5574d9f-h9lkj": networkPlugin cni failed to teardown pod "kubernetes-dashboard-5fd5574d9f-h9lkj_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.28 -j CNI-ccdaf7fbb300cf5645075624 -m comment --comment name: "crio" id: "68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ccdaf7fbb300cf5645075624':No such file or directory
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         Try `iptables -h' or 'iptables --help' for more information.
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         ]
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:  > pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-h9lkj"
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]: E0802 00:53:28.733604    3486 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         rpc error: code = Unknown desc = [failed to set up sandbox container "68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283" network for pod "kubernetes-dashboard-5fd5574d9f-h9lkj": networkPlugin cni failed to set up pod "kubernetes-dashboard-5fd5574d9f-h9lkj_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283" network for pod "kubernetes-dashboard-5fd5574d9f-h9lkj": networkPlugin cni failed to teardown pod "kubernetes-dashboard-5fd5574d9f-h9lkj_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.28 -j CNI-ccdaf7fbb300cf5645075624 -m comment --comment name: "crio" id: "68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ccdaf7fbb300cf5645075624':No such file or directory
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         Try `iptables -h' or 'iptables --help' for more information.
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:         ]
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]:  > pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-h9lkj"
	Aug 02 00:53:28 newest-cni-20220801175129-13911 kubelet[3486]: E0802 00:53:28.733715    3486 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-5fd5574d9f-h9lkj_kubernetes-dashboard(e13c04b8-2836-441e-979d-0e1da26349d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-5fd5574d9f-h9lkj_kubernetes-dashboard(e13c04b8-2836-441e-979d-0e1da26349d5)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283\\\" network for pod \\\"kubernetes-dashboard-5fd5574d9f-h9lkj\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-5fd5574d9f-h9lkj_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283\\\" network for pod \\\"kubernetes-dashboard-5fd5574d9f-h9lkj\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-5fd5574d9f-h9lkj_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.28 -j CNI-ccdaf7fbb300cf5645075624 -m comment --comment name: \\\"crio\\\" id: \\\"68bec9b6df1d2127f4dd4b47b64b7374826e090475f46e7b13374cb310c2b283\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ccdaf7fbb300cf5645075624':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-h9lkj" podUID=e13c04b8-2836-441e-979d-0e1da26349d5
	
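Note: the root failure in the kubelet errors above is "failed to set bridge addr: could not add IP address to \"cni0\": permission denied"; the iptables exit status 2 ("Couldn't load target `CNI-...'") is secondary noise from teardown re-running after the per-sandbox NAT chain was already gone. To look at the bridge and the surviving CNI chains on the node while the profile is still up (a sketch, not part of the test):

	out/minikube-darwin-amd64 ssh -p newest-cni-20220801175129-13911 -- ip addr show cni0
	out/minikube-darwin-amd64 ssh -p newest-cni-20220801175129-13911 -- sudo iptables -t nat -S POSTROUTING
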
	* 
	* ==> storage-provisioner [46a7d4aadebd] <==
	* I0802 00:53:19.037618       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 00:53:19.047343       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 00:53:19.047431       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [c8176b7cf5ae] <==
	* I0802 00:52:39.315258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0802 00:53:15.607724       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	
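Note: the fatal in [c8176b7cf5ae] above means the in-cluster apiserver VIP 10.96.0.1:443 was unreachable from the pod within the client timeout, consistent with the broken pod network; the replacement instance ([46a7d4aadebd]) got as far as leader election once the apiserver was reachable again. The endpoint backing that VIP can be checked with:

	kubectl --context newest-cni-20220801175129-13911 -n default get endpoints kubernetes
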

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220801175129-13911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-cs7mc metrics-server-5c6f97fb75-qwvtt dashboard-metrics-scraper-dffd48c4c-k7hpg kubernetes-dashboard-5fd5574d9f-h9lkj
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220801175129-13911 describe pod coredns-6d4b75cb6d-cs7mc metrics-server-5c6f97fb75-qwvtt dashboard-metrics-scraper-dffd48c4c-k7hpg kubernetes-dashboard-5fd5574d9f-h9lkj
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220801175129-13911 describe pod coredns-6d4b75cb6d-cs7mc metrics-server-5c6f97fb75-qwvtt dashboard-metrics-scraper-dffd48c4c-k7hpg kubernetes-dashboard-5fd5574d9f-h9lkj: exit status 1 (228.157186ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-cs7mc" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-qwvtt" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-k7hpg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-h9lkj" not found

** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220801175129-13911 describe pod coredns-6d4b75cb6d-cs7mc metrics-server-5c6f97fb75-qwvtt dashboard-metrics-scraper-dffd48c4c-k7hpg kubernetes-dashboard-5fd5574d9f-h9lkj: exit status 1
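Note: the four NotFound errors are a race in the post-mortem itself, not an additional product failure: the pods named by the status.phase!=Running query at helpers_test.go:261 were likely deleted or replaced before the describe at helpers_test.go:275 ran. Capturing the same information in a single call avoids the window:

	kubectl --context newest-cni-20220801175129-13911 get pods -A --field-selector=status.phase!=Running -o wide
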
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (48.49s)


Test pass (249/289)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 75.51
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.24.3/json-events 6.94
11 TestDownloadOnly/v1.24.3/preload-exists 0
14 TestDownloadOnly/v1.24.3/kubectl 0
15 TestDownloadOnly/v1.24.3/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.74
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.43
18 TestDownloadOnlyKic 7.26
19 TestBinaryMirror 1.7
20 TestOffline 49.99
22 TestAddons/Setup 126.95
26 TestAddons/parallel/MetricsServer 5.64
27 TestAddons/parallel/HelmTiller 11.17
29 TestAddons/parallel/CSI 48.79
30 TestAddons/parallel/Headlamp 10.27
32 TestAddons/serial/GCPAuth 15.28
33 TestAddons/StoppedEnableDisable 13
34 TestCertOptions 32.73
35 TestCertExpiration 238.98
36 TestDockerFlags 32.71
37 TestForceSystemdFlag 33.77
38 TestForceSystemdEnv 32.88
40 TestHyperKitDriverInstallOrUpdate 6.88
43 TestErrorSpam/setup 27.43
44 TestErrorSpam/start 2.41
45 TestErrorSpam/status 1.32
46 TestErrorSpam/pause 1.89
47 TestErrorSpam/unpause 1.93
48 TestErrorSpam/stop 13.15
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 43.68
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 45.06
55 TestFunctional/serial/KubeContext 0.03
56 TestFunctional/serial/KubectlGetPods 1.63
59 TestFunctional/serial/CacheCmd/cache/add_remote 5.35
60 TestFunctional/serial/CacheCmd/cache/add_local 1.87
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
62 TestFunctional/serial/CacheCmd/cache/list 0.08
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.45
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.6
65 TestFunctional/serial/CacheCmd/cache/delete 0.15
66 TestFunctional/serial/MinikubeKubectlCmd 0.5
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.64
68 TestFunctional/serial/ExtraConfig 45.55
69 TestFunctional/serial/ComponentHealth 0.05
70 TestFunctional/serial/LogsCmd 3.32
71 TestFunctional/serial/LogsFileCmd 3.21
73 TestFunctional/parallel/ConfigCmd 0.46
74 TestFunctional/parallel/DashboardCmd 13.56
75 TestFunctional/parallel/DryRun 1.71
76 TestFunctional/parallel/InternationalLanguage 0.64
77 TestFunctional/parallel/StatusCmd 1.39
80 TestFunctional/parallel/ServiceCmd 13.44
82 TestFunctional/parallel/AddonsCmd 0.29
83 TestFunctional/parallel/PersistentVolumeClaim 26.97
85 TestFunctional/parallel/SSHCmd 0.95
86 TestFunctional/parallel/CpCmd 1.69
87 TestFunctional/parallel/MySQL 23.78
88 TestFunctional/parallel/FileSync 0.49
89 TestFunctional/parallel/CertSync 2.77
93 TestFunctional/parallel/NodeLabels 0.05
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
97 TestFunctional/parallel/Version/short 0.14
98 TestFunctional/parallel/Version/components 0.66
99 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
100 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
101 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
102 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
103 TestFunctional/parallel/ImageCommands/ImageBuild 3.34
104 TestFunctional/parallel/ImageCommands/Setup 2.02
105 TestFunctional/parallel/DockerEnv/bash 1.73
106 TestFunctional/parallel/UpdateContextCmd/no_changes 0.32
107 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.4
108 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.48
110 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.4
111 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.53
112 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.89
113 TestFunctional/parallel/ImageCommands/ImageRemove 0.84
114 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.91
115 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.52
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.71
117 TestFunctional/parallel/ProfileCmd/profile_list 0.53
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.66
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.18
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/MountCmd/any-port 9.52
130 TestFunctional/parallel/MountCmd/specific-port 2.89
131 TestFunctional/delete_addon-resizer_images 0.17
132 TestFunctional/delete_my-image_image 0.07
133 TestFunctional/delete_minikube_cached_images 0.07
143 TestJSONOutput/start/Command 42.35
144 TestJSONOutput/start/Audit 0
146 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/pause/Command 0.66
150 TestJSONOutput/pause/Audit 0
152 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/unpause/Command 0.66
156 TestJSONOutput/unpause/Audit 0
158 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/stop/Command 12.37
162 TestJSONOutput/stop/Audit 0
164 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
166 TestErrorJSONOutput 0.76
168 TestKicCustomNetwork/create_custom_network 31.64
169 TestKicCustomNetwork/use_default_bridge_network 29.65
170 TestKicExistingNetwork 29.61
171 TestKicCustomSubnet 30.33
172 TestMainNoArgs 0.07
173 TestMinikubeProfile 64.54
176 TestMountStart/serial/StartWithMountFirst 7.42
177 TestMountStart/serial/VerifyMountFirst 0.43
178 TestMountStart/serial/StartWithMountSecond 7.72
179 TestMountStart/serial/VerifyMountSecond 0.44
180 TestMountStart/serial/DeleteFirst 2.28
181 TestMountStart/serial/VerifyMountPostDelete 0.43
182 TestMountStart/serial/Stop 1.62
183 TestMountStart/serial/RestartStopped 5.26
184 TestMountStart/serial/VerifyMountPostStop 0.43
187 TestMultiNode/serial/FreshStart2Nodes 97.82
188 TestMultiNode/serial/DeployApp2Nodes 6.29
189 TestMultiNode/serial/PingHostFrom2Pods 0.85
190 TestMultiNode/serial/AddNode 25.95
191 TestMultiNode/serial/ProfileList 0.52
192 TestMultiNode/serial/CopyFile 16.46
193 TestMultiNode/serial/StopNode 14.17
194 TestMultiNode/serial/StartAfterStop 19.91
195 TestMultiNode/serial/RestartKeepsNodes 112.15
196 TestMultiNode/serial/DeleteNode 18.7
197 TestMultiNode/serial/StopMultiNode 25.1
198 TestMultiNode/serial/RestartMultiNode 74.51
199 TestMultiNode/serial/ValidateNameConflict 30.28
205 TestScheduledStopUnix 102.28
206 TestSkaffold 61.33
208 TestInsufficientStorage 12.73
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.5
225 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.71
226 TestStoppedBinaryUpgrade/Setup 0.75
228 TestStoppedBinaryUpgrade/MinikubeLogs 3.55
230 TestPause/serial/Start 44.49
231 TestPause/serial/SecondStartNoReconfiguration 39.43
232 TestPause/serial/Pause 0.75
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.37
243 TestNoKubernetes/serial/StartWithK8s 28.85
244 TestNoKubernetes/serial/StartWithStopK8s 17.32
245 TestNoKubernetes/serial/Start 6.64
246 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
247 TestNoKubernetes/serial/ProfileList 29.2
248 TestNoKubernetes/serial/Stop 1.62
249 TestNoKubernetes/serial/StartNoArgs 4.22
250 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.51
251 TestNetworkPlugins/group/auto/Start 43.6
252 TestNetworkPlugins/group/auto/KubeletFlags 0.45
253 TestNetworkPlugins/group/auto/NetCatPod 12.76
254 TestNetworkPlugins/group/auto/DNS 0.12
255 TestNetworkPlugins/group/auto/Localhost 0.1
256 TestNetworkPlugins/group/auto/HairPin 5.12
257 TestNetworkPlugins/group/kindnet/Start 50.59
258 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
259 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
260 TestNetworkPlugins/group/kindnet/NetCatPod 11.68
261 TestNetworkPlugins/group/kindnet/DNS 0.12
262 TestNetworkPlugins/group/kindnet/Localhost 0.11
263 TestNetworkPlugins/group/kindnet/HairPin 0.11
264 TestNetworkPlugins/group/cilium/Start 92.68
265 TestNetworkPlugins/group/calico/Start 74.27
266 TestNetworkPlugins/group/cilium/ControllerPod 5.02
267 TestNetworkPlugins/group/cilium/KubeletFlags 0.49
268 TestNetworkPlugins/group/cilium/NetCatPod 13.56
269 TestNetworkPlugins/group/cilium/DNS 0.13
270 TestNetworkPlugins/group/cilium/Localhost 0.11
271 TestNetworkPlugins/group/cilium/HairPin 0.12
272 TestNetworkPlugins/group/false/Start 46.65
273 TestNetworkPlugins/group/calico/ControllerPod 5.02
274 TestNetworkPlugins/group/calico/KubeletFlags 0.52
275 TestNetworkPlugins/group/calico/NetCatPod 11.66
276 TestNetworkPlugins/group/calico/DNS 0.13
277 TestNetworkPlugins/group/calico/Localhost 0.12
278 TestNetworkPlugins/group/calico/HairPin 0.11
279 TestNetworkPlugins/group/bridge/Start 81.04
280 TestNetworkPlugins/group/false/KubeletFlags 0.48
281 TestNetworkPlugins/group/false/NetCatPod 11.82
282 TestNetworkPlugins/group/false/DNS 0.12
283 TestNetworkPlugins/group/false/Localhost 0.11
284 TestNetworkPlugins/group/false/HairPin 5.12
285 TestNetworkPlugins/group/enable-default-cni/Start 45.21
286 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.52
287 TestNetworkPlugins/group/bridge/KubeletFlags 0.87
288 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.72
289 TestNetworkPlugins/group/bridge/NetCatPod 12.71
290 TestNetworkPlugins/group/bridge/DNS 0.12
291 TestNetworkPlugins/group/bridge/Localhost 0.11
292 TestNetworkPlugins/group/bridge/HairPin 0.12
293 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
294 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
295 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
296 TestNetworkPlugins/group/kubenet/Start 47.01
299 TestNetworkPlugins/group/kubenet/KubeletFlags 0.45
300 TestNetworkPlugins/group/kubenet/NetCatPod 12.65
301 TestNetworkPlugins/group/kubenet/DNS 0.12
302 TestNetworkPlugins/group/kubenet/Localhost 0.1
305 TestStartStop/group/embed-certs/serial/FirstStart 44.89
306 TestStartStop/group/embed-certs/serial/DeployApp 9.72
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.75
308 TestStartStop/group/embed-certs/serial/Stop 12.56
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.38
310 TestStartStop/group/embed-certs/serial/SecondStart 291.47
313 TestStartStop/group/old-k8s-version/serial/Stop 1.66
314 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.38
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.02
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.92
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.47
321 TestStartStop/group/no-preload/serial/FirstStart 55.35
322 TestStartStop/group/no-preload/serial/DeployApp 9.73
323 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.75
324 TestStartStop/group/no-preload/serial/Stop 12.64
325 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.38
326 TestStartStop/group/no-preload/serial/SecondStart 299.47
328 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.02
329 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.6
330 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.47
333 TestStartStop/group/default-k8s-different-port/serial/FirstStart 52.95
334 TestStartStop/group/default-k8s-different-port/serial/DeployApp 11.72
335 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.76
336 TestStartStop/group/default-k8s-different-port/serial/Stop 12.54
337 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.37
338 TestStartStop/group/default-k8s-different-port/serial/SecondStart 299.18
339 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 29.02
340 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.55
341 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.49
345 TestStartStop/group/newest-cni/serial/FirstStart 40.34
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.78
348 TestStartStop/group/newest-cni/serial/Stop 12.59
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.38
350 TestStartStop/group/newest-cni/serial/SecondStart 18
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.47
TestDownloadOnly/v1.16.0/json-events (75.51s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220801163356-13911 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220801163356-13911 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (1m15.511933312s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (75.51s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220801163356-13911
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220801163356-13911: exit status 85 (294.159013ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220801163356-13911 | jenkins | v1.26.0 | 01 Aug 22 16:33 PDT |          |
	|         | download-only-20220801163356-13911 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 16:33:56
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 16:33:56.680688   13913 out.go:296] Setting OutFile to fd 1 ...
	I0801 16:33:56.680879   13913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:33:56.680885   13913 out.go:309] Setting ErrFile to fd 2...
	I0801 16:33:56.680889   13913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:33:56.680982   13913 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	W0801 16:33:56.681076   13913 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/config/config.json: no such file or directory
	I0801 16:33:56.681763   13913 out.go:303] Setting JSON to true
	I0801 16:33:56.696550   13913 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5607,"bootTime":1659391229,"procs":343,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 16:33:56.696671   13913 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 16:33:56.718222   13913 out.go:97] [download-only-20220801163356-13911] minikube v1.26.0 on Darwin 12.5
	I0801 16:33:56.718384   13913 notify.go:193] Checking for updates...
	W0801 16:33:56.718470   13913 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball: no such file or directory
	I0801 16:33:56.740199   13913 out.go:169] MINIKUBE_LOCATION=14695
	I0801 16:33:56.762995   13913 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 16:33:56.784035   13913 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 16:33:56.805339   13913 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 16:33:56.827326   13913 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	W0801 16:33:56.869991   13913 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0801 16:33:56.870262   13913 driver.go:365] Setting default libvirt URI to qemu:///system
	W0801 16:34:56.222923   13913 docker.go:113] docker version returned error: deadline exceeded running "docker version --format {{.Server.Os}}-{{.Server.Version}}": signal: killed
	I0801 16:34:56.245090   13913 out.go:97] Using the docker driver based on user configuration
	I0801 16:34:56.245108   13913 start.go:284] selected driver: docker
	I0801 16:34:56.245117   13913 start.go:808] validating driver "docker" against <nil>
	I0801 16:34:56.245205   13913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 16:34:56.378945   13913 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib
/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 16:34:56.400898   13913 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0801 16:34:56.421666   13913 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0801 16:34:56.463831   13913 out.go:169] 
	W0801 16:34:56.484692   13913 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0801 16:34:56.505805   13913 out.go:169] 
	I0801 16:34:56.547602   13913 out.go:169] 
	W0801 16:34:56.568868   13913 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0801 16:34:56.568998   13913 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0801 16:34:56.569065   13913 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0801 16:34:56.589775   13913 out.go:169] 
	I0801 16:34:56.610695   13913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 16:34:56.735344   13913 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib
/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0801 16:34:56.756817   13913 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0801 16:34:56.756865   13913 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0801 16:34:56.800818   13913 out.go:169] 
	W0801 16:34:56.821876   13913 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0801 16:34:56.821956   13913 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0801 16:34:56.821988   13913 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0801 16:34:56.842549   13913 out.go:169] 
	I0801 16:34:56.884830   13913 out.go:169] 
	W0801 16:34:56.905683   13913 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0801 16:34:56.926757   13913 out.go:169] 
	I0801 16:34:56.947785   13913 start_flags.go:377] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0801 16:34:56.947989   13913 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0801 16:34:56.968668   13913 out.go:169] Using Docker Desktop driver with root privileges
	I0801 16:34:56.989795   13913 cni.go:95] Creating CNI manager for ""
	I0801 16:34:56.989812   13913 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 16:34:56.989829   13913 start_flags.go:310] config:
	{Name:download-only-20220801163356-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220801163356-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDoma
in:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 16:34:57.010793   13913 out.go:97] Starting control plane node download-only-20220801163356-13911 in cluster download-only-20220801163356-13911
	I0801 16:34:57.010844   13913 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 16:34:57.031579   13913 out.go:97] Pulling base image ...
	I0801 16:34:57.031639   13913 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 16:34:57.031676   13913 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 16:34:57.031850   13913 cache.go:107] acquiring lock: {Name:mkce27c207a7bf01881de4cf2e18a8ec061785d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 16:34:57.031852   13913 cache.go:107] acquiring lock: {Name:mkfc0907b62ced692f882f0eb93744a36506348f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 16:34:57.031936   13913 cache.go:107] acquiring lock: {Name:mk6f37f014cd0844e60dc9643585431560cd3d80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 16:34:57.033879   13913 cache.go:107] acquiring lock: {Name:mk53eec39e62da2caab673025ed0e99d5b9df463 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 16:34:57.033884   13913 cache.go:107] acquiring lock: {Name:mkd795dbd76e065a9a8799da9c653b0a9a6e30c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 16:34:57.034161   13913 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0801 16:34:57.034166   13913 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0801 16:34:57.034163   13913 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0801 16:34:57.034254   13913 cache.go:107] acquiring lock: {Name:mk137689c32aafd28670c53412de44d61875a82f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 16:34:57.034327   13913 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0801 16:34:57.034409   13913 cache.go:107] acquiring lock: {Name:mkc1311664ab34b8b38cf6f487141fdbeb468cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 16:34:57.034433   13913 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0801 16:34:57.034620   13913 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0801 16:34:57.034702   13913 cache.go:107] acquiring lock: {Name:mka9a7cd25343e5fd862f117061161f7975b0db3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 16:34:57.034594   13913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/download-only-20220801163356-13911/config.json ...
	I0801 16:34:57.034650   13913 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 16:34:57.034875   13913 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0801 16:34:57.034816   13913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/download-only-20220801163356-13911/config.json: {Name:mk3f6cf93c910d48b1fec98d099b94f35aa20084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 16:34:57.035504   13913 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0801 16:34:57.035979   13913 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0801 16:34:57.035988   13913 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0801 16:34:57.035999   13913 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0801 16:34:57.039838   13913 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0801 16:34:57.041022   13913 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0801 16:34:57.041090   13913 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0801 16:34:57.041872   13913 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0801 16:34:57.042008   13913 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0801 16:34:57.042079   13913 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0801 16:34:57.042204   13913 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0801 16:34:57.042579   13913 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0801 16:34:57.094873   13913 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 to local cache
	I0801 16:34:57.095096   13913 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local cache directory
	I0801 16:34:57.095215   13913 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 to local cache
	I0801 16:34:57.839421   13913 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0801 16:34:57.857674   13913 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0801 16:34:57.860003   13913 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0801 16:34:57.874497   13913 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0801 16:34:57.881852   13913 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0801 16:34:57.904134   13913 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0801 16:34:58.009319   13913 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0801 16:34:58.009401   13913 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0801 16:34:58.009418   13913 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 977.515829ms
	I0801 16:34:58.009429   13913 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0801 16:34:58.010879   13913 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0801 16:34:58.339356   13913 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0801 16:34:58.339372   13913 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.307507349s
	I0801 16:34:58.339384   13913 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0801 16:34:59.656438   13913 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0801 16:34:59.656461   13913 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 2.621989393s
	I0801 16:34:59.656471   13913 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0801 16:35:01.122085   13913 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0801 16:35:01.334903   13913 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0801 16:35:01.334919   13913 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 4.302874522s
	I0801 16:35:01.334928   13913 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0801 16:35:01.459542   13913 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0801 16:35:01.459558   13913 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 4.425204154s
	I0801 16:35:01.459567   13913 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0801 16:35:01.823770   13913 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0801 16:35:01.823791   13913 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 4.791778792s
	I0801 16:35:01.823800   13913 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0801 16:35:01.925154   13913 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0801 16:35:01.925176   13913 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 4.893217934s
	I0801 16:35:01.925193   13913 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0801 16:35:02.976181   13913 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0801 16:35:02.976199   13913 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 5.942268247s
	I0801 16:35:02.976207   13913 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0801 16:35:02.976222   13913 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220801163356-13911"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
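Note: the warnings in this run are a direct consequence of the Docker daemon socket being down ("dial unix docker.raw.sock: connect: no such file or directory"), so `docker system info` reported NCPU:0 and MemTotal:0 and minikube's validation tripped the 2-CPU and 1800MiB minimums quoted in the log. A standalone sketch of that style of resource check, illustrative only and not minikube's actual validation code:

    // resource_check.go: query `docker system info` (the same invocation
    // cli_runner.go records above) and compare the reported CPU and memory
    // against the minimums quoted in the log output.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type dockerInfo struct {
        NCPU     int   `json:"NCPU"`
        MemTotal int64 `json:"MemTotal"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker daemon unreachable:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("could not parse docker info:", err)
            return
        }
        const minCPUs, minMemMiB = 2, 1800 // thresholds quoted in the log
        memMiB := info.MemTotal / (1024 * 1024)
        if info.NCPU < minCPUs {
            fmt.Printf("have %d CPUs, Kubernetes requires at least %d\n", info.NCPU, minCPUs)
        }
        if memMiB < minMemMiB {
            fmt.Printf("have %dMiB, Kubernetes requires at least %dMiB\n", memMiB, minMemMiB)
        }
    }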

TestDownloadOnly/v1.24.3/json-events (6.94s)

=== RUN   TestDownloadOnly/v1.24.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220801163356-13911 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220801163356-13911 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=docker --driver=docker : (6.93623684s)
--- PASS: TestDownloadOnly/v1.24.3/json-events (6.94s)
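Note: the json-events subtest drives `minikube start -o=json` and consumes the machine-readable event stream it prints to stdout. A rough sketch of such a consumer; the profile name below is made up, and the "type" field is an assumption about the CloudEvents-style objects minikube emits:

    // json_events.go: run a download-only start with -o=json and print the
    // type of each JSON event that appears on stdout.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-amd64", "start", "-o=json",
            "--download-only", "-p", "download-only-demo", "--driver=docker")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(stdout)
        for sc.Scan() {
            var ev map[string]interface{}
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate any non-JSON noise on stdout
            }
            fmt.Println("event:", ev["type"])
        }
        _ = cmd.Wait()
    }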

TestDownloadOnly/v1.24.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.24.3/preload-exists
--- PASS: TestDownloadOnly/v1.24.3/preload-exists (0.00s)
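Note: preload-exists reduces to a stat of the preload tarball under MINIKUBE_HOME; the file name below is copied from the download recorded in the v1.24.3 LogsDuration output further down. A minimal standalone version of that assertion:

    // preload_exists.go: verify the v1.24.3 preload tarball is present in
    // the minikube cache. MINIKUBE_HOME is expected in the environment, as
    // it is for the harness in this run.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        tarball := filepath.Join(os.Getenv("MINIKUBE_HOME"), "cache", "preloaded-tarball",
            "preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4")
        if _, err := os.Stat(tarball); err != nil {
            fmt.Println("preload missing:", err)
            os.Exit(1)
        }
        fmt.Println("preload present:", tarball)
    }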

TestDownloadOnly/v1.24.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.24.3/kubectl
--- PASS: TestDownloadOnly/v1.24.3/kubectl (0.00s)

TestDownloadOnly/v1.24.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.24.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220801163356-13911
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220801163356-13911: exit status 85 (290.417792ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220801163356-13911 | jenkins | v1.26.0 | 01 Aug 22 16:33 PDT |          |
	|         | download-only-20220801163356-13911 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	| start   | -o=json --download-only -p         | download-only-20220801163356-13911 | jenkins | v1.26.0 | 01 Aug 22 16:35 PDT |          |
	|         | download-only-20220801163356-13911 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.24.3       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 16:35:12
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 16:35:12.728866   15388 out.go:296] Setting OutFile to fd 1 ...
	I0801 16:35:12.729047   15388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:35:12.729052   15388 out.go:309] Setting ErrFile to fd 2...
	I0801 16:35:12.729056   15388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:35:12.729159   15388 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	W0801 16:35:12.729254   15388 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/config/config.json: no such file or directory
	I0801 16:35:12.729628   15388 out.go:303] Setting JSON to true
	I0801 16:35:12.744771   15388 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5683,"bootTime":1659391229,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 16:35:12.744860   15388 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 16:35:12.767328   15388 out.go:97] [download-only-20220801163356-13911] minikube v1.26.0 on Darwin 12.5
	I0801 16:35:12.767538   15388 notify.go:193] Checking for updates...
	W0801 16:35:12.767565   15388 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball: no such file or directory
	I0801 16:35:12.789113   15388 out.go:169] MINIKUBE_LOCATION=14695
	I0801 16:35:12.810757   15388 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 16:35:12.832222   15388 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 16:35:12.854378   15388 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 16:35:12.876049   15388 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	W0801 16:35:12.919920   15388 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0801 16:35:12.920577   15388 config.go:180] Loaded profile config "download-only-20220801163356-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0801 16:35:12.920662   15388 start.go:716] api.Load failed for download-only-20220801163356-13911: filestore "download-only-20220801163356-13911": Docker machine "download-only-20220801163356-13911" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0801 16:35:12.920741   15388 driver.go:365] Setting default libvirt URI to qemu:///system
	W0801 16:35:12.920774   15388 start.go:716] api.Load failed for download-only-20220801163356-13911: filestore "download-only-20220801163356-13911": Docker machine "download-only-20220801163356-13911" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0801 16:35:12.991760   15388 docker.go:137] docker version: linux-20.10.17
	I0801 16:35:12.991875   15388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 16:35:13.126753   15388 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-08-01 23:35:13.050328866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 16:35:13.147885   15388 out.go:97] Using the docker driver based on existing profile
	I0801 16:35:13.147928   15388 start.go:284] selected driver: docker
	I0801 16:35:13.147943   15388 start.go:808] validating driver "docker" against &{Name:download-only-20220801163356-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220801163356-13911 Name
space:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 16:35:13.148192   15388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 16:35:13.286303   15388 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-08-01 23:35:13.212004436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 16:35:13.288430   15388 cni.go:95] Creating CNI manager for ""
	I0801 16:35:13.288451   15388 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0801 16:35:13.288466   15388 start_flags.go:310] config:
	{Name:download-only-20220801163356-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:download-only-20220801163356-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDoma
in:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 16:35:13.309695   15388 out.go:97] Starting control plane node download-only-20220801163356-13911 in cluster download-only-20220801163356-13911
	I0801 16:35:13.309806   15388 cache.go:120] Beginning downloading kic base image for docker with docker
	I0801 16:35:13.331615   15388 out.go:97] Pulling base image ...
	I0801 16:35:13.331720   15388 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 16:35:13.331811   15388 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 16:35:13.394371   15388 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 to local cache
	I0801 16:35:13.394534   15388 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local cache directory
	I0801 16:35:13.394549   15388 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local cache directory, skipping pull
	I0801 16:35:13.394555   15388 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in cache, skipping pull
	I0801 16:35:13.394563   15388 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 as a tarball
	I0801 16:35:13.409347   15388 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.3/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0801 16:35:13.409408   15388 cache.go:57] Caching tarball of preloaded images
	I0801 16:35:13.409758   15388 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0801 16:35:13.431498   15388 out.go:97] Downloading Kubernetes v1.24.3 preload ...
	I0801 16:35:13.431600   15388 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 ...
	I0801 16:35:13.535352   15388 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.3/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4?checksum=md5:ae1c8e7b1fa116b4699d7551d3812287 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220801163356-13911"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.3/LogsDuration (0.29s)
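Note: the preload download at 16:35:13.535 pins an md5 in its URL query parameter (checksum=md5:ae1c8e7b1fa116b4699d7551d3812287). A sketch of the equivalent after-the-fact verification of the downloaded tarball; illustrative only, not minikube's downloader:

    // verify_md5.go: hash a file and compare against the md5 the download
    // URL in the log advertises. Pass the .tar.lz4 path as the argument.
    package main

    import (
        "crypto/md5"
        "fmt"
        "io"
        "os"
    )

    func main() {
        const want = "ae1c8e7b1fa116b4699d7551d3812287" // from the log's checksum param
        if len(os.Args) < 2 {
            fmt.Println("usage: verify_md5 <path>")
            os.Exit(2)
        }
        f, err := os.Open(os.Args[1])
        if err != nil {
            panic(err)
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            panic(err)
        }
        if got := fmt.Sprintf("%x", h.Sum(nil)); got != want {
            fmt.Printf("checksum mismatch: got %s want %s\n", got, want)
            os.Exit(1)
        }
        fmt.Println("checksum OK")
    }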

TestDownloadOnly/DeleteAll (0.74s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.74s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220801163356-13911
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

TestDownloadOnlyKic (7.26s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220801163521-13911 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220801163521-13911 --force --alsologtostderr --driver=docker : (6.098293367s)
helpers_test.go:175: Cleaning up "download-docker-20220801163521-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220801163521-13911
--- PASS: TestDownloadOnlyKic (7.26s)

TestBinaryMirror (1.7s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220801163528-13911 --alsologtostderr --binary-mirror http://127.0.0.1:55211 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220801163528-13911 --alsologtostderr --binary-mirror http://127.0.0.1:55211 --driver=docker : (1.031076468s)
helpers_test.go:175: Cleaning up "binary-mirror-20220801163528-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220801163528-13911
--- PASS: TestBinaryMirror (1.70s)
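Note: TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:55211 here). Judging by the kubelet/kubectl/kubeadm URLs earlier in this report, such a mirror only needs to mimic the release bucket's <version>/bin/<os>/<arch>/<name> layout, so a plain file server suffices as a stand-in (the ./mirror directory name is an assumption):

    // binary_mirror.go: serve a directory tree shaped like the Kubernetes
    // release bucket, e.g. ./mirror/v1.24.3/bin/linux/amd64/kubectl is
    // served at http://127.0.0.1:55211/v1.24.3/bin/linux/amd64/kubectl.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        log.Fatal(http.ListenAndServe("127.0.0.1:55211", http.FileServer(http.Dir("./mirror"))))
    }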

TestOffline (49.99s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220801171037-13911 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220801171037-13911 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (47.113871545s)
helpers_test.go:175: Cleaning up "offline-docker-20220801171037-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220801171037-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220801171037-13911: (2.877681263s)
--- PASS: TestOffline (49.99s)

TestAddons/Setup (126.95s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220801163530-13911 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220801163530-13911 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m6.945439222s)
--- PASS: TestAddons/Setup (126.95s)

TestAddons/parallel/MetricsServer (5.64s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 2.080227ms
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-8595bd7d4c-pntlr" [4f83214e-572a-471d-af5a-75a60d61dbc9] Running
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007155414s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220801163530-13911 top pods -n kube-system
addons_test.go:384: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220801163530-13911 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.64s)
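Note: the helper here waits up to 6m0s for pods matching "k8s-app=metrics-server" to be healthy before exercising `kubectl top`. A rough equivalent of that wait loop, shelling out to kubectl the way the helpers do; the 5s polling interval is an assumption:

    // wait_metrics_server.go: poll pod phases for the metrics-server label
    // in the addons profile's context until one reports Running or the
    // 6-minute budget is exhausted.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "addons-20220801163530-13911",
                "get", "pods", "-n", "kube-system", "-l", "k8s-app=metrics-server",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if err == nil && strings.Contains(string(out), "Running") {
                fmt.Println("metrics-server is up")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for metrics-server")
    }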

TestAddons/parallel/HelmTiller (11.17s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 2.861192ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-c7d76457b-x7fl8" [469af38f-9dbb-4a6c-aba8-da14397ef57c] Running
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008895472s
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220801163530-13911 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Done: kubectl --context addons-20220801163530-13911 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.681144762s)
addons_test.go:442: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220801163530-13911 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.17s)
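The helm-tiller check above is self-contained: enable the addon, run the helm 2.x client once as a throwaway pod in kube-system, and confirm it answers. A minimal sketch of the same spot-check, assuming any running docker-driver profile (`minikube` stands in for the suite's out/minikube-darwin-amd64 binary; <profile> is a placeholder):

  minikube -p <profile> addons enable helm-tiller
  # wait for the tiller-deploy pod in kube-system to be Running, then:
  kubectl --context <profile> run --rm helm-test --restart=Never \
      --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
  minikube -p <profile> addons disable helm-tiller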

TestAddons/parallel/CSI (48.79s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 4.031126ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220801163530-13911 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:516: (dbg) Done: kubectl --context addons-20220801163530-13911 create -f testdata/csi-hostpath-driver/pvc.yaml: (3.077555863s)
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220801163530-13911 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220801163530-13911 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [5f1008ab-0f05-4439-a1f9-a56a3fd6bd87] Pending
helpers_test.go:342: "task-pv-pod" [5f1008ab-0f05-4439-a1f9-a56a3fd6bd87] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [5f1008ab-0f05-4439-a1f9-a56a3fd6bd87] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.00966862s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220801163530-13911 create -f testdata/csi-hostpath-driver/snapshot.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220801163530-13911 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:425: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 

=== CONT  TestAddons/parallel/CSI
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220801163530-13911 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220801163530-13911 delete pod task-pv-pod
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220801163530-13911 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220801163530-13911 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220801163530-13911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220801163530-13911 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [65639adf-0c20-4947-998d-f51fd64a043f] Pending
helpers_test.go:342: "task-pv-pod-restore" [65639adf-0c20-4947-998d-f51fd64a043f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [65639adf-0c20-4947-998d-f51fd64a043f] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.010188328s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220801163530-13911 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220801163530-13911 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220801163530-13911 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220801163530-13911 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220801163530-13911 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.860551186s)
addons_test.go:594: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220801163530-13911 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.79s)
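Condensed, the CSI round trip the log above walks through is provision, snapshot, delete, restore. A sketch of the same sequence, assuming the csi-hostpath-driver and volumesnapshots addons are enabled, kubectl is pointed at the profile's context, and the manifests from minikube's testdata/csi-hostpath-driver directory are at hand:

  kubectl create -f testdata/csi-hostpath-driver/pvc.yaml        # PVC "hpvc"
  kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml     # pod "task-pv-pod" mounts it
  kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml   # once the pod is Running
  kubectl get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
  kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
  kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC "hpvc-restore" from the snapshot
  kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod "task-pv-pod-restore"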

TestAddons/parallel/Headlamp (10.27s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-20220801163530-13911 --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-20220801163530-13911 --alsologtostderr -v=1: (1.255324665s)
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-866f5bd7bc-xkb7k" [7be3b0ba-95d3-44d8-875a-da5240bb4fc5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-866f5bd7bc-xkb7k" [7be3b0ba-95d3-44d8-875a-da5240bb4fc5] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.009144271s
--- PASS: TestAddons/parallel/Headlamp (10.27s)

TestAddons/serial/GCPAuth (15.28s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220801163530-13911 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220801163530-13911 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [37eb6f83-55cf-49b8-8578-7665b26c6441] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [37eb6f83-55cf-49b8-8578-7665b26c6441] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.008886955s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220801163530-13911 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220801163530-13911 describe sa gcp-auth-test
addons_test.go:650: (dbg) Run:  kubectl --context addons-20220801163530-13911 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220801163530-13911 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220801163530-13911 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220801163530-13911 addons disable gcp-auth --alsologtostderr -v=1: (6.664679443s)
--- PASS: TestAddons/serial/GCPAuth (15.28s)
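What the gcp-auth steps above establish is that the addon's webhook mutates new pods: a pod created after enabling it (busybox here) gets the credentials file mounted and the Google environment variables injected, with no change to the pod spec. The three probes, condensed (kubectl pointed at the profile's context):

  kubectl exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl exec busybox -- /bin/sh -c "cat /google-app-creds.json"
  kubectl exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"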

TestAddons/StoppedEnableDisable (13s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220801163530-13911
addons_test.go:134: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220801163530-13911: (12.572207213s)
addons_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220801163530-13911
addons_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220801163530-13911
--- PASS: TestAddons/StoppedEnableDisable (13.00s)

TestCertOptions (32.73s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220801171209-13911 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220801171209-13911 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (28.879036976s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220801171209-13911 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220801171209-13911 -- "sudo cat /etc/kubernetes/admin.conf"
E0801 17:12:39.097284   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "cert-options-20220801171209-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220801171209-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220801171209-13911: (2.879186612s)
--- PASS: TestCertOptions (32.73s)
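The two ssh probes above are the actual assertions: the apiserver certificate generated for the profile must carry the extra SANs (192.168.15.15, www.google.com) and the non-default port 8555 must appear in the node's admin.conf. The same inspection against any profile started with those flags (`minikube` stands in for the suite's out/minikube-darwin-amd64 binary; <profile> is a placeholder):

  minikube -p <profile> ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
  # look for the extra IPs and names under X509v3 Subject Alternative Name
  minikube ssh -p <profile> -- "sudo cat /etc/kubernetes/admin.conf"
  # the server: URL should end in :8555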

TestCertExpiration (238.98s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220801171201-13911 --memory=2048 --cert-expiration=3m --driver=docker 

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220801171201-13911 --memory=2048 --cert-expiration=3m --driver=docker : (29.12924341s)
E0801 17:12:37.461189   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220801171201-13911 --memory=2048 --cert-expiration=8760h --driver=docker 
E0801 17:15:32.456498   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:15:40.512024   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 17:15:52.937147   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220801171201-13911 --memory=2048 --cert-expiration=8760h --driver=docker : (27.120609224s)
helpers_test.go:175: Cleaning up "cert-expiration-20220801171201-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220801171201-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220801171201-13911: (2.731412535s)
--- PASS: TestCertExpiration (238.98s)

TestDockerFlags (32.71s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220801171136-13911 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220801171136-13911 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (28.77380068s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220801171136-13911 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220801171136-13911 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220801171136-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220801171136-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220801171136-13911: (2.879688988s)
--- PASS: TestDockerFlags (32.71s)
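The systemctl probes above are how the flags are confirmed to reach the Docker daemon inside the node: --docker-env values land in the unit's Environment property, --docker-opt values in its ExecStart line. The same spot-check, with the profile name reduced to a <profile> placeholder:

  minikube -p <profile> ssh "sudo systemctl show docker --property=Environment --no-pager"
  # expect FOO=BAR and BAZ=BAT in the output
  minikube -p <profile> ssh "sudo systemctl show docker --property=ExecStart --no-pager"
  # expect the debug and icc=true options on the dockerd command line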

TestForceSystemdFlag (33.77s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220801171127-13911 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220801171127-13911 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (30.350519382s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220801171127-13911 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220801171127-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220801171127-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220801171127-13911: (2.839896806s)
--- PASS: TestForceSystemdFlag (33.77s)

TestForceSystemdEnv (32.88s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220801171104-13911 --memory=2048 --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220801171104-13911 --memory=2048 --alsologtostderr -v=5 --driver=docker : (29.521449647s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220801171104-13911 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220801171104-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220801171104-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220801171104-13911: (2.770124527s)
--- PASS: TestForceSystemdEnv (32.88s)

TestHyperKitDriverInstallOrUpdate (6.88s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.88s)

TestErrorSpam/setup (27.43s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220801163908-13911 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220801163908-13911 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 --driver=docker : (27.433527182s)
--- PASS: TestErrorSpam/setup (27.43s)

TestErrorSpam/start (2.41s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 start --dry-run
--- PASS: TestErrorSpam/start (2.41s)

TestErrorSpam/status (1.32s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 status
--- PASS: TestErrorSpam/status (1.32s)

TestErrorSpam/pause (1.89s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 pause
--- PASS: TestErrorSpam/pause (1.89s)

TestErrorSpam/unpause (1.93s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

TestErrorSpam/stop (13.15s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 stop: (12.476578103s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220801163908-13911 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220801163908-13911 stop
--- PASS: TestErrorSpam/stop (13.15s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/test/nested/copy/13911/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (43.68s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220801163958-13911 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2160: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220801163958-13911 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (43.679191444s)
--- PASS: TestFunctional/serial/StartWithProxy (43.68s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (45.06s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220801163958-13911 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220801163958-13911 --alsologtostderr -v=8: (45.060716234s)
functional_test.go:655: soft start took 45.061201666s for "functional-20220801163958-13911" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.06s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (1.63s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220801163958-13911 get po -A
functional_test.go:688: (dbg) Done: kubectl --context functional-20220801163958-13911 get po -A: (1.631150361s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.63s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 cache add k8s.gcr.io/pause:3.1: (1.263096064s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 cache add k8s.gcr.io/pause:3.3: (1.94707203s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 cache add k8s.gcr.io/pause:latest: (2.133912544s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.35s)

TestFunctional/serial/CacheCmd/cache/add_local (1.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220801163958-13911 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4043996594/001
functional_test.go:1081: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 cache add minikube-local-cache-test:functional-20220801163958-13911
functional_test.go:1081: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 cache add minikube-local-cache-test:functional-20220801163958-13911: (1.341717412s)
functional_test.go:1086: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 cache delete minikube-local-cache-test:functional-20220801163958-13911
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220801163958-13911
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.87s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (426.556144ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 cache reload: (1.257241418s)
functional_test.go:1155: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.60s)
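The cache_reload sequence above is the standard recovery path when an image has been removed from the node but is still in minikube's host-side cache: the crictl inspecti failure (exit status 1 above) confirms the image is gone, and cache reload pushes it back in. Condensed, with the profile name reduced to a <profile> placeholder:

  minikube -p <profile> ssh sudo docker rmi k8s.gcr.io/pause:latest
  minikube -p <profile> ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # fails: image gone
  minikube -p <profile> cache reload
  minikube -p <profile> ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again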

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.5s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 kubectl -- --context functional-20220801163958-13911 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.64s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220801163958-13911 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.64s)

TestFunctional/serial/ExtraConfig (45.55s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220801163958-13911 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220801163958-13911 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.552805977s)
functional_test.go:753: restart took 45.552990642s for "functional-20220801163958-13911" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.55s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220801163958-13911 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (3.32s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 logs
functional_test.go:1228: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 logs: (3.315047233s)
--- PASS: TestFunctional/serial/LogsCmd (3.32s)

TestFunctional/serial/LogsFileCmd (3.21s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd721572149/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd721572149/001/logs.txt: (3.204213873s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.21s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220801163958-13911 config get cpus: exit status 14 (51.432858ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220801163958-13911 config get cpus: exit status 14 (56.042714ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
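Both exit status 14 results above are expected behavior, not failures: config get returns 14 when the key is absent. The full round trip the test drives, with the profile name reduced to a <profile> placeholder:

  minikube -p <profile> config unset cpus
  minikube -p <profile> config get cpus     # exit 14: key not set
  minikube -p <profile> config set cpus 2
  minikube -p <profile> config get cpus     # prints 2
  minikube -p <profile> config unset cpus
  minikube -p <profile> config get cpus     # exit 14 again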

TestFunctional/parallel/DashboardCmd (13.56s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220801163958-13911 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220801163958-13911 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 17762: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.56s)

TestFunctional/parallel/DryRun (1.71s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220801163958-13911 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220801163958-13911 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (822.364276ms)

-- stdout --
	* [functional-20220801163958-13911] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0801 16:43:38.570724   17653 out.go:296] Setting OutFile to fd 1 ...
	I0801 16:43:38.570886   17653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:43:38.570893   17653 out.go:309] Setting ErrFile to fd 2...
	I0801 16:43:38.570897   17653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:43:38.571024   17653 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 16:43:38.571482   17653 out.go:303] Setting JSON to false
	I0801 16:43:38.586498   17653 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":6189,"bootTime":1659391229,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 16:43:38.586616   17653 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 16:43:38.608316   17653 out.go:177] * [functional-20220801163958-13911] minikube v1.26.0 on Darwin 12.5
	I0801 16:43:38.671253   17653 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 16:43:38.714359   17653 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 16:43:38.757424   17653 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 16:43:38.801622   17653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 16:43:38.902927   17653 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 16:43:38.924857   17653 config.go:180] Loaded profile config "functional-20220801163958-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 16:43:38.925550   17653 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 16:43:38.997329   17653 docker.go:137] docker version: linux-20.10.17
	I0801 16:43:38.997569   17653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 16:43:39.138229   17653 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-01 23:43:39.07926649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 16:43:39.197081   17653 out.go:177] * Using the docker driver based on existing profile
	I0801 16:43:39.233984   17653 start.go:284] selected driver: docker
	I0801 16:43:39.234017   17653 start.go:808] validating driver "docker" against &{Name:functional-20220801163958-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220801163958-13911 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-pol
icy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 16:43:39.234141   17653 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 16:43:39.256893   17653 out.go:177] 
	W0801 16:43:39.278109   17653 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0801 16:43:39.299230   17653 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220801163958-13911 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.71s)
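Neither dry-run invocation above touches the existing cluster; the first is expected to fail validation because 250MB is below minikube's 1800MB usable minimum (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY in the stderr dump), while the second, with no memory override, validates cleanly. The pair, with the profile name reduced to a <profile> placeholder:

  minikube start -p <profile> --dry-run --memory 250MB --alsologtostderr --driver=docker   # exit 23
  minikube start -p <profile> --dry-run --alsologtostderr -v=1 --driver=docker             # validates OK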

TestFunctional/parallel/InternationalLanguage (0.64s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220801163958-13911 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220801163958-13911 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (635.218681ms)

-- stdout --
	* [functional-20220801163958-13911] minikube v1.26.0 sur Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0801 16:43:29.661551   17458 out.go:296] Setting OutFile to fd 1 ...
	I0801 16:43:29.661679   17458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:43:29.661684   17458 out.go:309] Setting ErrFile to fd 2...
	I0801 16:43:29.661687   17458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:43:29.661805   17458 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 16:43:29.662214   17458 out.go:303] Setting JSON to false
	I0801 16:43:29.678681   17458 start.go:115] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":6180,"bootTime":1659391229,"procs":339,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0801 16:43:29.678753   17458 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0801 16:43:29.702641   17458 out.go:177] * [functional-20220801163958-13911] minikube v1.26.0 sur Darwin 12.5
	I0801 16:43:29.744832   17458 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 16:43:29.766829   17458 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 16:43:29.788794   17458 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0801 16:43:29.830536   17458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 16:43:29.872587   17458 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 16:43:29.893726   17458 config.go:180] Loaded profile config "functional-20220801163958-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 16:43:29.894047   17458 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 16:43:29.964219   17458 docker.go:137] docker version: linux-20.10.17
	I0801 16:43:29.964375   17458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 16:43:30.100998   17458 info.go:265] docker info: {ID:6EC6:JPLU:RBE2:YGXQ:7MKM:SZ2E:QGJY:D2A4:RMXP:2SNY:SIQG:GTNP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-01 23:43:30.037543163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 16:43:30.142849   17458 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0801 16:43:30.163931   17458 start.go:284] selected driver: docker
	I0801 16:43:30.163964   17458 start.go:808] validating driver "docker" against &{Name:functional-20220801163958-13911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220801163958-13911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 16:43:30.164174   17458 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 16:43:30.187927   17458 out.go:177] 
	W0801 16:43:30.208981   17458 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0801 16:43:30.229909   17458 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.64s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 status
functional_test.go:852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.39s)
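
Note: the -f argument above is an ordinary Go text/template rendered against minikube's status data (the test's format string spells "kublet"; that typo is preserved verbatim from the test source). A minimal sketch of how such a template expands, assuming a hypothetical Status struct whose fields mirror the names the template references:
-- sketch (Go) --
package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for minikube's status payload; only the fields
// referenced by the test's format string are modeled here.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	// Prints: host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured
	_ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}
-- /sketch --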

                                                
                                    
TestFunctional/parallel/ServiceCmd (13.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220801163958-13911 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220801163958-13911 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-gqjrw" [0e5c465e-7b0b-41f3-9917-4cc8b3aaa18f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-gqjrw" [0e5c465e-7b0b-41f3-9917-4cc8b3aaa18f] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 6.006840628s
functional_test.go:1448: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 service list: (1.221767772s)
functional_test.go:1462: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 service --namespace=default --https --url hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 service --namespace=default --https --url hello-node: (2.044575351s)
functional_test.go:1475: found endpoint: https://127.0.0.1:57069
functional_test.go:1490: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1490: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 service hello-node --url --format={{.IP}}: (2.043014411s)
functional_test.go:1504: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 service hello-node --url: (2.027651799s)
functional_test.go:1510: found endpoint for hello-node: http://127.0.0.1:57126
--- PASS: TestFunctional/parallel/ServiceCmd (13.44s)
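
Note: the endpoints logged above come from `minikube service ... --url`, which on the Docker driver tunnels the NodePort service to a local port. A rough sketch of the resolve-and-probe flow (illustrative only, not the test's actual helper; assumes a minikube binary on PATH):
-- sketch (Go) --
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for a locally reachable URL for the NodePort service.
	out, err := exec.Command("minikube", "-p", "functional-20220801163958-13911",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://127.0.0.1:57126
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("endpoint:", url, "status:", resp.Status)
}
-- /sketch --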

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [021bc851-5801-42e7-8315-ffa2a1ef5529] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00746969s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220801163958-13911 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220801163958-13911 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220801163958-13911 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220801163958-13911 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [23a80984-225a-4a75-85de-0bb20102f47f] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [23a80984-225a-4a75-85de-0bb20102f47f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [23a80984-225a-4a75-85de-0bb20102f47f] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.008463554s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220801163958-13911 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220801163958-13911 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220801163958-13911 delete -f testdata/storage-provisioner/pod.yaml: (1.242267583s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220801163958-13911 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [e1b1c7db-476b-4f29-a33d-9d2f992e4a09] Pending
helpers_test.go:342: "sp-pod" [e1b1c7db-476b-4f29-a33d-9d2f992e4a09] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [e1b1c7db-476b-4f29-a33d-9d2f992e4a09] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009596565s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220801163958-13911 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.97s)
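
Note: the sequence above is a persistence check: write a marker file into the claim-backed mount, delete and recreate the pod, then confirm the file outlived the pod. A condensed sketch of that flow with plain kubectl calls (the real test also waits for the recreated pod to be Running before the final check):
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the profile's context.
func kubectl(args ...string) ([]byte, error) {
	base := []string{"--context", "functional-20220801163958-13911"}
	return exec.Command("kubectl", append(base, args...)...).CombinedOutput()
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write marker
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // kill the pod
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // recreate it
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},              // marker survives
	}
	for _, s := range steps {
		out, err := kubectl(s...)
		fmt.Printf("%v -> %s (err=%v)\n", s, out, err)
	}
}
-- /sketch --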

                                                
                                    
TestFunctional/parallel/SSHCmd (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.95s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh -n functional-20220801163958-13911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 cp functional-20220801163958-13911:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd4105425859/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh -n functional-20220801163958-13911 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.69s)

                                                
                                    
TestFunctional/parallel/MySQL (23.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220801163958-13911 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-h6ftl" [e6295c5d-a6a6-41e2-9f21-8cc21e1ce025] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-h6ftl" [e6295c5d-a6a6-41e2-9f21-8cc21e1ce025] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.016728036s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220801163958-13911 exec mysql-67f7d69d8b-h6ftl -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220801163958-13911 exec mysql-67f7d69d8b-h6ftl -- mysql -ppassword -e "show databases;": exit status 1 (169.527294ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220801163958-13911 exec mysql-67f7d69d8b-h6ftl -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220801163958-13911 exec mysql-67f7d69d8b-h6ftl -- mysql -ppassword -e "show databases;": exit status 1 (108.25001ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220801163958-13911 exec mysql-67f7d69d8b-h6ftl -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220801163958-13911 exec mysql-67f7d69d8b-h6ftl -- mysql -ppassword -e "show databases;": exit status 1 (111.050452ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220801163958-13911 exec mysql-67f7d69d8b-h6ftl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.78s)
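
Note: the two failed attempts above are the mysql image's normal startup sequence: ERROR 1045 while the entrypoint provisions credentials, then ERROR 2002 as the server restarts. The test simply retries the query until it exits zero. A sketch of that retry loop (hypothetical helper; pod name and context taken from the log):
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-20220801163958-13911",
			"exec", "mysql-67f7d69d8b-h6ftl", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("succeeded on attempt %d:\n%s", attempt, out)
			return
		}
		// Early attempts fail with ERROR 1045/2002 while mysqld initializes.
		fmt.Printf("attempt %d: %v; retrying\n", attempt, err)
		time.Sleep(time.Duration(attempt) * time.Second) // simple linear backoff
	}
	fmt.Println("mysql never became ready")
}
-- /sketch --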

                                                
                                    
TestFunctional/parallel/FileSync (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/13911/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo cat /etc/test/nested/copy/13911/hosts"
E0801 16:42:38.703832   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.49s)

                                                
                                    
TestFunctional/parallel/CertSync (2.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/13911.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo cat /etc/ssl/certs/13911.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/13911.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo cat /usr/share/ca-certificates/13911.pem"
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/139112.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo cat /etc/ssl/certs/139112.pem"
E0801 16:42:37.426253   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 16:42:37.432121   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 16:42:37.442524   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 16:42:37.462613   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/139112.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo cat /usr/share/ca-certificates/139112.pem"
E0801 16:42:37.502961   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 16:42:37.583069   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 16:42:37.743208   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0801 16:42:38.063447   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CertSync (2.77s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220801163958-13911 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo systemctl is-active crio": exit status 1 (421.865615ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
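
Note: this pass looks odd at first glance because the command exits non-zero. `systemctl is-active` deliberately exits with status 3 for inactive units, so a "runtime is disabled" check has to accept a failing exit code whenever stdout reports anything other than "active". A sketch of that logic (hypothetical helper, not minikube's own):
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeDisabled reports whether the given systemd unit is not active.
// Output() still returns the captured stdout when the command exits
// non-zero, which is exactly the case for inactive units.
func runtimeDisabled(unit string) bool {
	out, _ := exec.Command("systemctl", "is-active", unit).Output()
	return strings.TrimSpace(string(out)) != "active"
}

func main() {
	fmt.Println("crio disabled:", runtimeDisabled("crio"))
}
-- /sketch --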

                                                
                                    
TestFunctional/parallel/Version/short (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 version --short
--- PASS: TestFunctional/parallel/Version/short (0.14s)

                                                
                                    
TestFunctional/parallel/Version/components (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220801163958-13911
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220801163958-13911
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.1                             | da86e6ba6ca19 | 742kB  |
| gcr.io/k8s-minikube/busybox                 | latest                          | beae173ccac6a | 1.24MB |
| k8s.gcr.io/kube-controller-manager          | v1.24.3                         | 586c112956dfc | 119MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>                          | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-20220801163958-13911 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-20220801163958-13911 | 61df70930bee6 | 30B    |
| docker.io/library/nginx                     | latest                          | 670dcc86b69df | 142MB  |
| k8s.gcr.io/kube-apiserver                   | v1.24.3                         | d521dd763e2e3 | 130MB  |
| k8s.gcr.io/etcd                             | 3.5.3-0                         | aebe758cef4cd | 299MB  |
| k8s.gcr.io/pause                            | 3.7                             | 221177c6082a8 | 711kB  |
| k8s.gcr.io/pause                            | 3.3                             | 0184c1613d929 | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8                             | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | latest                          | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine                          | e46bcc6975310 | 23.5MB |
| docker.io/library/mysql                     | 5.7                             | 3147495b3a5ce | 431MB  |
| k8s.gcr.io/kube-proxy                       | v1.24.3                         | 2ae1ba6417cbc | 110MB  |
| k8s.gcr.io/kube-scheduler                   | v1.24.3                         | 3a5aa3a515f5d | 51MB   |
| k8s.gcr.io/pause                            | 3.6                             | 6270bb605e12e | 683kB  |
| docker.io/localhost/my-image                | functional-20220801163958-13911 | 2286085e7937c | 1.24MB |
|---------------------------------------------|---------------------------------|---------------|--------|
2022/08/01 16:43:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls --format json:
[{"id":"61df70930bee6f93618781c9b3a935dfebf7bd07ad773683d38ff5f97142c251","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220801163958-13911"],"size":"30"},{"id":"3147495b3a5ce957dee2319099a8808c1418e0b0a2c82c9b2396c5fb4b688509","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"431000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.3"],"size":"130000000"},{"id":"221177c6082a88ea4f6240ab2450d540955ac6f4d5454
f0e15751b653ebda165","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"711000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"2286085e7937cd7ac4873b1a2402d0176c9e76775b5f700bb68504979c1a1952","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220801163958-13911"],"size":"1240000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","repoDigests":[],"repoTags":["k8s.gcr.io
/etcd:3.5.3-0"],"size":"299000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23500000"},{"id":"2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.3"],"size":"110000000"},{"id":"3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.3"],"size":"51000000"},{"id":"586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.3"],"size":"119000000"},{"id":"ffd4cfbbe753e62419e129e
e2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220801163958-13911"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)
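
Note: the JSON above has a small, regular shape, so it decodes cleanly into a slice of structs. A sketch that unmarshals one entry; the struct tags follow the keys in the log (id, repoDigests, repoTags, size), but the type itself is illustrative, not minikube's own:
-- sketch (Go) --
package main

import (
	"encoding/json"
	"fmt"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	data := []byte(`[{"id":"350b164e7ae1","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"}]`)
	var imgs []image
	if err := json.Unmarshal(data, &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Println(im.RepoTags[0], im.Size)
	}
}
-- /sketch --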

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls --format yaml:
- id: d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.3
size: "130000000"
- id: 586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.3
size: "119000000"
- id: 2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.3
size: "110000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "299000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.3
size: "51000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23500000"
- id: 3147495b3a5ce957dee2319099a8808c1418e0b0a2c82c9b2396c5fb4b688509
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "431000000"
- id: 221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.7
size: "711000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220801163958-13911
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 61df70930bee6f93618781c9b3a935dfebf7bd07ad773683d38ff5f97142c251
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220801163958-13911
size: "30"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh pgrep buildkitd: exit status 1 (446.07527ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image build -t localhost/my-image:functional-20220801163958-13911 testdata/build
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image build -t localhost/my-image:functional-20220801163958-13911 testdata/build: (2.549118036s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image build -t localhost/my-image:functional-20220801163958-13911 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 9d49412a11f8
Removing intermediate container 9d49412a11f8
---> aa100e76fc33
Step 3/3 : ADD content.txt /
---> 2286085e7937
Successfully built 2286085e7937
Successfully tagged localhost/my-image:functional-20220801163958-13911
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.34s)
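
Note: the `pgrep buildkitd` probe exits 1 when no matching process exists; the test uses that to tell whether BuildKit or the classic Docker builder will serve `image build` (here the classic builder, hence the "Step 1/3" output). A sketch of the probe (hypothetical helper):
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
)

// buildkitRunning sshes into the node and looks for a buildkitd process;
// pgrep exits 0 only if at least one match is found.
func buildkitRunning(profile string) bool {
	err := exec.Command("minikube", "-p", profile, "ssh", "pgrep buildkitd").Run()
	return err == nil
}

func main() {
	fmt.Println("buildkitd running:", buildkitRunning("functional-20220801163958-13911"))
}
-- /sketch --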

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.946506749s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220801163958-13911
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.02s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220801163958-13911 docker-env) && out/minikube-darwin-amd64 status -p functional-20220801163958-13911"
functional_test.go:491: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220801163958-13911 docker-env) && out/minikube-darwin-amd64 status -p functional-20220801163958-13911": (1.038564984s)
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220801163958-13911 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.73s)
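
Note: `docker-env` prints shell exports (DOCKER_HOST and friends) that repoint the host's docker CLI at the daemon inside the minikube node, which is why the follow-up `docker images` lists the cluster's images rather than the host's. A sketch of the same round trip (assumes bash and minikube on PATH):
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Evaluate the exports in a subshell, then list images through them.
	cmd := exec.Command("/bin/bash", "-c",
		"eval $(minikube -p functional-20220801163958-13911 docker-env) && docker images")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}
-- /sketch --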

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.32s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.40s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801163958-13911

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801163958-13911: (3.065935978s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.48s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801163958-13911
E0801 16:42:39.984226   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801163958-13911: (2.080143053s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.40s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0801 16:42:42.544561   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.202525948s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220801163958-13911
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801163958-13911
E0801 16:42:47.664870   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801163958-13911: (3.865190469s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.53s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image save gcr.io/google-containers/addon-resizer:functional-20220801163958-13911 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image save gcr.io/google-containers/addon-resizer:functional-20220801163958-13911 /Users/jenkins/workspace/addon-resizer-save.tar: (1.891820944s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.89s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image rm gcr.io/google-containers/addon-resizer:functional-20220801163958-13911
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.84s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image load /Users/jenkins/workspace/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.569600367s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.91s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220801163958-13911
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220801163958-13911
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220801163958-13911 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220801163958-13911: (2.385830834s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220801163958-13911
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.52s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.71s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-darwin-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.71s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-darwin-amd64 profile list
E0801 16:42:57.905559   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "458.620929ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1324: Took "74.49598ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1361: Took "520.081975ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1374: Took "135.516102ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220801163958-13911 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220801163958-13911 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [77e3eee4-f89a-4446-af87-d36f5b2a1183] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [77e3eee4-f89a-4446-af87-d36f5b2a1183] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.009401129s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220801163958-13911 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220801163958-13911 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 17405: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (9.52s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220801163958-13911 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3143829056/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1659397410255046000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3143829056/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1659397410255046000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3143829056/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1659397410255046000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3143829056/001/test-1659397410255046000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (420.18407ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  1 23:43 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  1 23:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  1 23:43 test-1659397410255046000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh cat /mount-9p/test-1659397410255046000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220801163958-13911 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [c4a8497d-f49a-4610-8c17-ab001e80ab8d] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [c4a8497d-f49a-4610-8c17-ab001e80ab8d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [c4a8497d-f49a-4610-8c17-ab001e80ab8d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [c4a8497d-f49a-4610-8c17-ab001e80ab8d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.011729092s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220801163958-13911 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220801163958-13911 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3143829056/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.52s)

TestFunctional/parallel/MountCmd/specific-port (2.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220801163958-13911 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2755237452/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (541.691824ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220801163958-13911 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2755237452/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh "sudo umount -f /mount-9p": exit status 1 (470.772404ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220801163958-13911 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220801163958-13911 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2755237452/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.89s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220801163958-13911
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220801163958-13911
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220801163958-13911
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestJSONOutput/start/Command (42.35s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220801165115-13911 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220801165115-13911 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (42.350167635s)
--- PASS: TestJSONOutput/start/Command (42.35s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220801165115-13911 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220801165115-13911 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.37s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220801165115-13911 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220801165115-13911 --output=json --user=testUser: (12.369781019s)
--- PASS: TestJSONOutput/stop/Command (12.37s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220801165213-13911 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220801165213-13911 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (323.777495ms)

-- stdout --
	{"specversion":"1.0","id":"231a8a1d-14ee-484e-8dd0-b042dd625715","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220801165213-13911] minikube v1.26.0 on Darwin 12.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e7719f7-c9d7-473c-8a61-79d4995f68da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14695"}}
	{"specversion":"1.0","id":"f0e2bb38-2e45-40e5-8954-057319507b31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig"}}
	{"specversion":"1.0","id":"7703533e-03f9-4d7f-9d39-3cc679c7a621","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"00a493a8-9686-4b8d-85a5-aaf01c1fbd97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8372134e-7413-4180-9d77-a861b0b2fce7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube"}}
	{"specversion":"1.0","id":"7951855d-ac57-40ec-b9e2-b5a04dc2b4d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220801165213-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220801165213-13911
--- PASS: TestErrorJSONOutput (0.76s)

TestKicCustomNetwork/create_custom_network (31.64s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220801165214-13911 --network=
E0801 16:52:37.423837   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 16:52:39.060336   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220801165214-13911 --network=: (28.861368311s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220801165214-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220801165214-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220801165214-13911: (2.711597041s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.64s)

TestKicCustomNetwork/use_default_bridge_network (29.65s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220801165246-13911 --network=bridge
E0801 16:53:06.762318   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220801165246-13911 --network=bridge: (27.061766466s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220801165246-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220801165246-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220801165246-13911: (2.524356247s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.65s)

TestKicExistingNetwork (29.61s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220801165315-13911 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220801165315-13911 --network=existing-network: (26.666911443s)
helpers_test.go:175: Cleaning up "existing-network-20220801165315-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220801165315-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220801165315-13911: (2.53318451s)
--- PASS: TestKicExistingNetwork (29.61s)

TestKicCustomSubnet (30.33s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220801165345-13911 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220801165345-13911 --subnet=192.168.60.0/24: (27.547011204s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220801165345-13911 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220801165345-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220801165345-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220801165345-13911: (2.720161957s)
--- PASS: TestKicCustomSubnet (30.33s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (64.54s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-20220801165415-13911 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-20220801165415-13911 --driver=docker : (28.009296304s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-20220801165415-13911 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-20220801165415-13911 --driver=docker : (29.029537759s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-20220801165415-13911
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-20220801165415-13911
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220801165415-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-20220801165415-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-20220801165415-13911: (2.733279382s)
helpers_test.go:175: Cleaning up "first-20220801165415-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-20220801165415-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-20220801165415-13911: (2.737449632s)
--- PASS: TestMinikubeProfile (64.54s)

TestMountStart/serial/StartWithMountFirst (7.42s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220801165520-13911 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220801165520-13911 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.417635248s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.42s)

TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220801165520-13911 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

TestMountStart/serial/StartWithMountSecond (7.72s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220801165520-13911 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220801165520-13911 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.715220296s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.72s)

TestMountStart/serial/VerifyMountSecond (0.44s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220801165520-13911 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.44s)

TestMountStart/serial/DeleteFirst (2.28s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220801165520-13911 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220801165520-13911 --alsologtostderr -v=5: (2.279962877s)
--- PASS: TestMountStart/serial/DeleteFirst (2.28s)

TestMountStart/serial/VerifyMountPostDelete (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220801165520-13911 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

TestMountStart/serial/Stop (1.62s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220801165520-13911
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220801165520-13911: (1.623416004s)
--- PASS: TestMountStart/serial/Stop (1.62s)

TestMountStart/serial/RestartStopped (5.26s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220801165520-13911
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220801165520-13911: (4.256806634s)
--- PASS: TestMountStart/serial/RestartStopped (5.26s)

TestMountStart/serial/VerifyMountPostStop (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220801165520-13911 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.43s)

TestMultiNode/serial/FreshStart2Nodes (97.82s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220801165548-13911 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220801165548-13911 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m37.058220456s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (97.82s)

TestMultiNode/serial/DeployApp2Nodes (6.29s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.749246261s)
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- rollout status deployment/busybox: (3.096289185s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- exec busybox-d46db594c-4sb9v -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- exec busybox-d46db594c-dqp6q -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- exec busybox-d46db594c-4sb9v -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- exec busybox-d46db594c-dqp6q -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- exec busybox-d46db594c-4sb9v -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- exec busybox-d46db594c-dqp6q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.29s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- exec busybox-d46db594c-4sb9v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- exec busybox-d46db594c-4sb9v -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- exec busybox-d46db594c-dqp6q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220801165548-13911 -- exec busybox-d46db594c-dqp6q -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

TestMultiNode/serial/AddNode (25.95s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220801165548-13911 -v 3 --alsologtostderr
E0801 16:57:37.429241   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 16:57:39.064471   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220801165548-13911 -v 3 --alsologtostderr: (24.850515206s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status --alsologtostderr: (1.100736806s)
--- PASS: TestMultiNode/serial/AddNode (25.95s)

TestMultiNode/serial/ProfileList (0.52s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.52s)

TestMultiNode/serial/CopyFile (16.46s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status --output json --alsologtostderr: (1.090751019s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp testdata/cp-test.txt multinode-20220801165548-13911:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp multinode-20220801165548-13911:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1661437678/001/cp-test_multinode-20220801165548-13911.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp multinode-20220801165548-13911:/home/docker/cp-test.txt multinode-20220801165548-13911-m02:/home/docker/cp-test_multinode-20220801165548-13911_multinode-20220801165548-13911-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m02 "sudo cat /home/docker/cp-test_multinode-20220801165548-13911_multinode-20220801165548-13911-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp multinode-20220801165548-13911:/home/docker/cp-test.txt multinode-20220801165548-13911-m03:/home/docker/cp-test_multinode-20220801165548-13911_multinode-20220801165548-13911-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m03 "sudo cat /home/docker/cp-test_multinode-20220801165548-13911_multinode-20220801165548-13911-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp testdata/cp-test.txt multinode-20220801165548-13911-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp multinode-20220801165548-13911-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1661437678/001/cp-test_multinode-20220801165548-13911-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp multinode-20220801165548-13911-m02:/home/docker/cp-test.txt multinode-20220801165548-13911:/home/docker/cp-test_multinode-20220801165548-13911-m02_multinode-20220801165548-13911.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911 "sudo cat /home/docker/cp-test_multinode-20220801165548-13911-m02_multinode-20220801165548-13911.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp multinode-20220801165548-13911-m02:/home/docker/cp-test.txt multinode-20220801165548-13911-m03:/home/docker/cp-test_multinode-20220801165548-13911-m02_multinode-20220801165548-13911-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m03 "sudo cat /home/docker/cp-test_multinode-20220801165548-13911-m02_multinode-20220801165548-13911-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp testdata/cp-test.txt multinode-20220801165548-13911-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp multinode-20220801165548-13911-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1661437678/001/cp-test_multinode-20220801165548-13911-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp multinode-20220801165548-13911-m03:/home/docker/cp-test.txt multinode-20220801165548-13911:/home/docker/cp-test_multinode-20220801165548-13911-m03_multinode-20220801165548-13911.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911 "sudo cat /home/docker/cp-test_multinode-20220801165548-13911-m03_multinode-20220801165548-13911.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 cp multinode-20220801165548-13911-m03:/home/docker/cp-test.txt multinode-20220801165548-13911-m02:/home/docker/cp-test_multinode-20220801165548-13911-m03_multinode-20220801165548-13911-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 ssh -n multinode-20220801165548-13911-m02 "sudo cat /home/docker/cp-test_multinode-20220801165548-13911-m03_multinode-20220801165548-13911-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (16.46s)
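
The CopyFile sequence above exercises every copy direction (host to node, node to host, node to node) and verifies each transfer by reading the file back over SSH. A minimal Go sketch of that copy-and-verify pattern, assuming minikube is on PATH; the profile name multinode-demo and the file paths are placeholders, not the test's actual helper:

// copycheck.go: minimal sketch of the copy-and-verify pattern above.
// The profile name and paths are placeholders, not taken from this run.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "multinode-demo" // hypothetical profile name
	// Host -> node copy, mirroring `minikube cp testdata/cp-test.txt <node>:<path>`.
	if err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", profile+":/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Verify by reading the file back over SSH, as the helpers above do.
	out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("copied contents:", strings.TrimSpace(string(out)))
}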

TestMultiNode/serial/StopNode (14.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220801165548-13911 node stop m03: (12.501509899s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status: exit status 7 (832.392333ms)
-- stdout --
	multinode-20220801165548-13911
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220801165548-13911-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220801165548-13911-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status --alsologtostderr: exit status 7 (831.436979ms)
-- stdout --
	multinode-20220801165548-13911
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220801165548-13911-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220801165548-13911-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0801 16:58:30.262467   21107 out.go:296] Setting OutFile to fd 1 ...
	I0801 16:58:30.262692   21107 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:58:30.262697   21107 out.go:309] Setting ErrFile to fd 2...
	I0801 16:58:30.262701   21107 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 16:58:30.262815   21107 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 16:58:30.263006   21107 out.go:303] Setting JSON to false
	I0801 16:58:30.263022   21107 mustload.go:65] Loading cluster: multinode-20220801165548-13911
	I0801 16:58:30.263335   21107 config.go:180] Loaded profile config "multinode-20220801165548-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 16:58:30.263345   21107 status.go:253] checking status of multinode-20220801165548-13911 ...
	I0801 16:58:30.263722   21107 cli_runner.go:164] Run: docker container inspect multinode-20220801165548-13911 --format={{.State.Status}}
	I0801 16:58:30.333460   21107 status.go:328] multinode-20220801165548-13911 host status = "Running" (err=<nil>)
	I0801 16:58:30.333502   21107 host.go:66] Checking if "multinode-20220801165548-13911" exists ...
	I0801 16:58:30.333791   21107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220801165548-13911
	I0801 16:58:30.404208   21107 host.go:66] Checking if "multinode-20220801165548-13911" exists ...
	I0801 16:58:30.404507   21107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 16:58:30.404556   21107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220801165548-13911
	I0801 16:58:30.475169   21107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59202 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/multinode-20220801165548-13911/id_rsa Username:docker}
	I0801 16:58:30.557775   21107 ssh_runner.go:195] Run: systemctl --version
	I0801 16:58:30.561982   21107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 16:58:30.571370   21107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220801165548-13911
	I0801 16:58:30.642709   21107 kubeconfig.go:92] found "multinode-20220801165548-13911" server: "https://127.0.0.1:59201"
	I0801 16:58:30.642735   21107 api_server.go:165] Checking apiserver status ...
	I0801 16:58:30.642772   21107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 16:58:30.652415   21107 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1657/cgroup
	W0801 16:58:30.660280   21107 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1657/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0801 16:58:30.660294   21107 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59201/healthz ...
	I0801 16:58:30.666385   21107 api_server.go:266] https://127.0.0.1:59201/healthz returned 200:
	ok
	I0801 16:58:30.666399   21107 status.go:419] multinode-20220801165548-13911 apiserver status = Running (err=<nil>)
	I0801 16:58:30.666406   21107 status.go:255] multinode-20220801165548-13911 status: &{Name:multinode-20220801165548-13911 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0801 16:58:30.666423   21107 status.go:253] checking status of multinode-20220801165548-13911-m02 ...
	I0801 16:58:30.666659   21107 cli_runner.go:164] Run: docker container inspect multinode-20220801165548-13911-m02 --format={{.State.Status}}
	I0801 16:58:30.738016   21107 status.go:328] multinode-20220801165548-13911-m02 host status = "Running" (err=<nil>)
	I0801 16:58:30.738037   21107 host.go:66] Checking if "multinode-20220801165548-13911-m02" exists ...
	I0801 16:58:30.738292   21107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220801165548-13911-m02
	I0801 16:58:30.809311   21107 host.go:66] Checking if "multinode-20220801165548-13911-m02" exists ...
	I0801 16:58:30.809598   21107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 16:58:30.809639   21107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220801165548-13911-m02
	I0801 16:58:30.880709   21107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59330 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/multinode-20220801165548-13911-m02/id_rsa Username:docker}
	I0801 16:58:30.962507   21107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 16:58:30.971418   21107 status.go:255] multinode-20220801165548-13911-m02 status: &{Name:multinode-20220801165548-13911-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0801 16:58:30.971445   21107 status.go:253] checking status of multinode-20220801165548-13911-m03 ...
	I0801 16:58:30.971688   21107 cli_runner.go:164] Run: docker container inspect multinode-20220801165548-13911-m03 --format={{.State.Status}}
	I0801 16:58:31.042537   21107 status.go:328] multinode-20220801165548-13911-m03 host status = "Stopped" (err=<nil>)
	I0801 16:58:31.042558   21107 status.go:341] host is not running, skipping remaining checks
	I0801 16:58:31.042564   21107 status.go:255] multinode-20220801165548-13911-m03 status: &{Name:multinode-20220801165548-13911-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.17s)
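
Both status checks above pass despite exit status 7: minikube uses that code to report a stopped node, not a command failure. A sketch of handling it, with a hypothetical profile name:

// statuscheck.go: sketch of tolerating exit code 7 from `minikube status`,
// which the log above shows when a node in the profile is stopped.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "multinode-demo", "status").Output() // hypothetical profile
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		fmt.Println("at least one node is stopped (expected after `node stop`)")
	} else if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}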

TestMultiNode/serial/StartAfterStop (19.91s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220801165548-13911 node start m03 --alsologtostderr: (18.697334128s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status: (1.098448248s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (19.91s)

TestMultiNode/serial/RestartKeepsNodes (112.15s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220801165548-13911
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220801165548-13911
E0801 16:59:00.477744   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220801165548-13911: (36.948946504s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220801165548-13911 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220801165548-13911 --wait=true -v=8 --alsologtostderr: (1m15.092890394s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220801165548-13911
--- PASS: TestMultiNode/serial/RestartKeepsNodes (112.15s)
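
The invariant checked above is that a full stop/start cycle preserves the node list. A minimal sketch of the same comparison, again with a placeholder profile name; note it really does stop and restart the cluster:

// restartcheck.go: sketch of the stop/start invariant above — `node list`
// output should match before and after the cycle.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func nodeList(profile string) []byte {
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	if err != nil {
		panic(err)
	}
	return out
}

func main() {
	profile := "multinode-demo" // hypothetical profile name
	before := nodeList(profile)
	if err := exec.Command("minikube", "stop", "-p", profile).Run(); err != nil {
		panic(err)
	}
	if err := exec.Command("minikube", "start", "-p", profile, "--wait=true").Run(); err != nil {
		panic(err)
	}
	after := nodeList(profile)
	fmt.Println("node list preserved:", bytes.Equal(before, after))
}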

TestMultiNode/serial/DeleteNode (18.7s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220801165548-13911 node delete m03: (16.386472229s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (1.434876991s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (18.70s)
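
After deleting m03, the test renders each remaining node's Ready condition through the go-template shown above. A sketch of the same readiness count (the extra single quotes in the logged argument are an artifact of the test's quoting and are dropped here):

// readycount.go: sketch of the readiness check above — render each node's
// Ready condition via a go-template and count the True results.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d node(s) Ready\n", strings.Count(string(out), "True"))
}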

TestMultiNode/serial/StopMultiNode (25.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220801165548-13911 stop: (24.735716042s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status: exit status 7 (180.663319ms)
-- stdout --
	multinode-20220801165548-13911
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220801165548-13911-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status --alsologtostderr: exit status 7 (180.687435ms)
-- stdout --
	multinode-20220801165548-13911
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220801165548-13911-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0801 17:01:26.775093   21747 out.go:296] Setting OutFile to fd 1 ...
	I0801 17:01:26.775265   21747 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:01:26.775271   21747 out.go:309] Setting ErrFile to fd 2...
	I0801 17:01:26.775275   21747 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 17:01:26.775385   21747 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 17:01:26.775552   21747 out.go:303] Setting JSON to false
	I0801 17:01:26.775567   21747 mustload.go:65] Loading cluster: multinode-20220801165548-13911
	I0801 17:01:26.775857   21747 config.go:180] Loaded profile config "multinode-20220801165548-13911": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0801 17:01:26.775869   21747 status.go:253] checking status of multinode-20220801165548-13911 ...
	I0801 17:01:26.776227   21747 cli_runner.go:164] Run: docker container inspect multinode-20220801165548-13911 --format={{.State.Status}}
	I0801 17:01:26.840233   21747 status.go:328] multinode-20220801165548-13911 host status = "Stopped" (err=<nil>)
	I0801 17:01:26.840258   21747 status.go:341] host is not running, skipping remaining checks
	I0801 17:01:26.840264   21747 status.go:255] multinode-20220801165548-13911 status: &{Name:multinode-20220801165548-13911 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0801 17:01:26.840308   21747 status.go:253] checking status of multinode-20220801165548-13911-m02 ...
	I0801 17:01:26.841425   21747 cli_runner.go:164] Run: docker container inspect multinode-20220801165548-13911-m02 --format={{.State.Status}}
	I0801 17:01:26.904844   21747 status.go:328] multinode-20220801165548-13911-m02 host status = "Stopped" (err=<nil>)
	I0801 17:01:26.904868   21747 status.go:341] host is not running, skipping remaining checks
	I0801 17:01:26.904876   21747 status.go:255] multinode-20220801165548-13911-m02 status: &{Name:multinode-20220801165548-13911-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.10s)

TestMultiNode/serial/RestartMultiNode (74.51s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220801165548-13911 --wait=true -v=8 --alsologtostderr --driver=docker 
E0801 17:02:37.431676   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220801165548-13911 --wait=true -v=8 --alsologtostderr --driver=docker : (1m11.739346717s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220801165548-13911 status --alsologtostderr
E0801 17:02:39.068774   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:372: (dbg) Done: kubectl get nodes: (1.897165206s)
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (74.51s)

TestMultiNode/serial/ValidateNameConflict (30.28s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220801165548-13911
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220801165548-13911-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220801165548-13911-m02 --driver=docker : exit status 14 (376.092016ms)
-- stdout --
	* [multinode-20220801165548-13911-m02] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220801165548-13911-m02' is duplicated with machine name 'multinode-20220801165548-13911-m02' in profile 'multinode-20220801165548-13911'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220801165548-13911-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220801165548-13911-m03 --driver=docker : (26.603974733s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220801165548-13911
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220801165548-13911: exit status 80 (528.429828ms)
-- stdout --
	* Adding node m03 to cluster multinode-20220801165548-13911
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220801165548-13911-m03 already exists in multinode-20220801165548-13911-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220801165548-13911-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220801165548-13911-m03: (2.712856239s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.28s)

TestScheduledStopUnix (102.28s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220801170741-13911 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220801170741-13911 --memory=2048 --driver=docker : (27.876977851s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220801170741-13911 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220801170741-13911 -n scheduled-stop-20220801170741-13911
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220801170741-13911 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220801170741-13911 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220801170741-13911 -n scheduled-stop-20220801170741-13911
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220801170741-13911
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220801170741-13911 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220801170741-13911
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220801170741-13911: exit status 7 (118.885828ms)
-- stdout --
	scheduled-stop-20220801170741-13911
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220801170741-13911 -n scheduled-stop-20220801170741-13911
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220801170741-13911 -n scheduled-stop-20220801170741-13911: exit status 7 (116.712744ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220801170741-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220801170741-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220801170741-13911: (2.401354217s)
--- PASS: TestScheduledStopUnix (102.28s)

TestSkaffold (61.33s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3515324228 version
skaffold_test.go:63: skaffold version: v1.39.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220801170923-13911 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220801170923-13911 --memory=2600 --driver=docker : (28.137784852s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3515324228 run --minikube-profile skaffold-20220801170923-13911 --kube-context skaffold-20220801170923-13911 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3515324228 run --minikube-profile skaffold-20220801170923-13911 --kube-context skaffold-20220801170923-13911 --status-check=true --port-forward=false --interactive=false: (18.295416049s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-f9dcd9df9-qc8mt" [ed07ae5f-4598-43f9-a92d-84c6fbfe3faa] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013718048s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-767d874d96-w8kdf" [ac6167a0-1fbf-48b3-ad5f-3045bb45c144] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008988109s
helpers_test.go:175: Cleaning up "skaffold-20220801170923-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220801170923-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220801170923-13911: (3.017789947s)
--- PASS: TestSkaffold (61.33s)
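
The leeroy-app and leeroy-web checks above poll up to 1m0s for pods matching a label to become healthy. A sketch with the same effect using `kubectl wait` rather than the harness's own polling helper:

// podwait.go: sketch of a label-based readiness wait equivalent to the
// harness's polling above; uses `kubectl wait`, not the test helper itself.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "wait", "pod",
		"-l", "app=leeroy-app", "-n", "default",
		"--for=condition=Ready", "--timeout=60s").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("pods not ready: %v\n%s", err, out))
	}
	fmt.Print(string(out))
}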

TestInsufficientStorage (12.73s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220801171024-13911 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220801171024-13911 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.35736552s)
-- stdout --
	{"specversion":"1.0","id":"419f9e64-5bf2-412b-92a9-d038706c1dcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220801171024-13911] minikube v1.26.0 on Darwin 12.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f53bceee-0b04-4295-9231-2f1b3f240200","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14695"}}
	{"specversion":"1.0","id":"0929e7bd-6a0c-41ca-a00a-cab52fe674d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig"}}
	{"specversion":"1.0","id":"3de20d3b-e375-44e6-a320-fd9342bdd416","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"54692184-4ca5-45f4-b179-59bc6e2769d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3cdbe79c-e304-4714-bb5f-41be4be188b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube"}}
	{"specversion":"1.0","id":"94d4a772-038a-4a99-863e-c35f015b469e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c88a2bc7-3e6b-45e6-8262-72ccf95d491e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b67de3f5-cccd-4f23-9645-2154f56aa6c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c3cd275-9a2e-4453-ab1e-b424a08c352d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"ea312ad5-d23d-405b-9d34-a8a03026740a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220801171024-13911 in cluster insufficient-storage-20220801171024-13911","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec7277d8-4f41-49f7-897d-430c1bdb5962","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d9adb54-b2c8-46c7-8a1b-21c655266744","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a3f77365-43bd-45c3-950a-d7f9f9ad6141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220801171024-13911 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220801171024-13911 --output=json --layout=cluster: exit status 7 (422.635732ms)
-- stdout --
	{"Name":"insufficient-storage-20220801171024-13911","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220801171024-13911","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0801 17:10:34.725051   23447 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220801171024-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220801171024-13911 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220801171024-13911 --output=json --layout=cluster: exit status 7 (421.666437ms)
-- stdout --
	{"Name":"insufficient-storage-20220801171024-13911","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220801171024-13911","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0801 17:10:35.147616   23457 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220801171024-13911" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	E0801 17:10:35.155869   23457 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/insufficient-storage-20220801171024-13911/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220801171024-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220801171024-13911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220801171024-13911: (2.522516925s)
--- PASS: TestInsufficientStorage (12.73s)
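
The --output=json --layout=cluster documents above are machine-readable. A sketch of decoding them; the struct below is an assumption modeled on the fields visible in this log, not minikube's own type, and the profile name is a placeholder:

// clusterstatus.go: sketch of decoding `minikube status --output=json
// --layout=cluster`; only fields visible in the log above are modeled.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusName string
	}
}

func main() {
	// status exits non-zero for degraded clusters (exit 7 above), so keep
	// the captured stdout even when the run error is non-nil.
	out, runErr := exec.Command("minikube", "status", "-p", "demo", // hypothetical profile
		"--output=json", "--layout=cluster").Output()
	var cs clusterStatus
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d), run error: %v\n", cs.Name, cs.StatusName, cs.StatusCode, runErr)
}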

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.5s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14695
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3266574173/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3266574173/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3266574173/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3266574173/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.50s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.71s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14695
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2080508358/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2080508358/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2080508358/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2080508358/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.71s)

TestStoppedBinaryUpgrade/Setup (0.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.55s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220801171600-13911
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220801171600-13911: (3.554279437s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.55s)

TestPause/serial/Start (44.49s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220801171654-13911 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0801 17:17:37.464954   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220801171654-13911 --memory=2048 --install-addons=false --wait=all --driver=docker : (44.486131997s)
--- PASS: TestPause/serial/Start (44.49s)

TestPause/serial/SecondStartNoReconfiguration (39.43s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220801171654-13911 --alsologtostderr -v=1 --driver=docker 
E0801 17:17:39.100663   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
E0801 17:17:55.819503   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220801171654-13911 --alsologtostderr -v=1 --driver=docker : (39.413519161s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.43s)

TestPause/serial/Pause (0.75s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220801171654-13911 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220801171923-13911 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220801171923-13911 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (366.942722ms)
-- stdout --
	* [NoKubernetes-20220801171923-13911] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14695
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

TestNoKubernetes/serial/StartWithK8s (28.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220801171923-13911 --driver=docker 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220801171923-13911 --driver=docker : (28.310420546s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220801171923-13911 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.85s)

TestNoKubernetes/serial/StartWithStopK8s (17.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220801171923-13911 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220801171923-13911 --no-kubernetes --driver=docker : (14.338692436s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220801171923-13911 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220801171923-13911 status -o json: exit status 2 (443.837617ms)
-- stdout --
	{"Name":"NoKubernetes-20220801171923-13911","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220801171923-13911
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220801171923-13911: (2.532450505s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.32s)

TestNoKubernetes/serial/Start (6.64s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220801171923-13911 --no-kubernetes --driver=docker 
E0801 17:20:11.944818   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220801171923-13911 --no-kubernetes --driver=docker : (6.64016572s)
--- PASS: TestNoKubernetes/serial/Start (6.64s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220801171923-13911 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220801171923-13911 "sudo systemctl is-active --quiet service kubelet": exit status 1 (420.443575ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)
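
The non-zero exit above is the point of the check: systemctl is-active exits with status 3 (surfaced through ssh as exit status 1 here) when kubelet is inactive, so a failing command is the passing case. A sketch with a placeholder profile name:

// notrunning.go: sketch of the inverted check above — a non-zero exit from
// `systemctl is-active` means kubelet is stopped, which is what we want here.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "ssh", "-p", "demo", // hypothetical profile
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err != nil {
		fmt.Println("kubelet inactive, as expected:", err)
		return
	}
	fmt.Println("unexpected: kubelet is active")
}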

TestNoKubernetes/serial/ProfileList (29.2s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (16.414200876s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
E0801 17:20:39.691608   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:20:42.195411   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (12.784586756s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.20s)

TestNoKubernetes/serial/Stop (1.62s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220801171923-13911
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220801171923-13911: (1.619138642s)
--- PASS: TestNoKubernetes/serial/Stop (1.62s)

TestNoKubernetes/serial/StartNoArgs (4.22s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220801171923-13911 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220801171923-13911 --driver=docker : (4.21814865s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.22s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.51s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220801171923-13911 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220801171923-13911 "sudo systemctl is-active --quiet service kubelet": exit status 1 (510.25232ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.51s)

TestNetworkPlugins/group/auto/Start (43.6s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220801171037-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220801171037-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (43.598533864s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.60s)

TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220801171037-13911 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

TestNetworkPlugins/group/auto/NetCatPod (12.76s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220801171037-13911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context auto-20220801171037-13911 replace --force -f testdata/netcat-deployment.yaml: (1.7230166s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-xzgd2" [ee4efcdd-fa84-4a2b-87d8-2f6b0c6a42a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-xzgd2" [ee4efcdd-fa84-4a2b-87d8-2f6b0c6a42a2] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.010131847s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.76s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220801171037-13911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (5.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.116359079s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.12s)

TestNetworkPlugins/group/kindnet/Start (50.59s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220801171038-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
E0801 17:22:37.495556   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 17:22:39.132339   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220801171038-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (50.589123005s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.59s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-lf9dq" [2f84881b-2448-4554-983d-7e7a08de6f1a] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013072603s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220801171038-13911 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.68s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220801171038-13911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kindnet-20220801171038-13911 replace --force -f testdata/netcat-deployment.yaml: (1.648537404s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-jbcfj" [52294cae-ad6b-41ce-b3f3-23b90ce66441] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-jbcfj" [52294cae-ad6b-41ce-b3f3-23b90ce66441] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00806852s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.68s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220801171038-13911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220801171038-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220801171038-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/cilium/Start (92.68s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220801171038-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220801171038-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m32.676867299s)
--- PASS: TestNetworkPlugins/group/cilium/Start (92.68s)

TestNetworkPlugins/group/calico/Start (74.27s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220801171038-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220801171038-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (1m14.265536811s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.27s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-pjct9" [620fab15-0c15-488a-9731-e0d07d0763b9] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.017526622s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220801171038-13911 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.49s)

TestNetworkPlugins/group/cilium/NetCatPod (13.56s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220801171038-13911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-20220801171038-13911 replace --force -f testdata/netcat-deployment.yaml: (2.523287902s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-jbst8" [b21e5438-0dad-41b6-ae6d-eaa3efa8af98] Pending
helpers_test.go:342: "netcat-869c55b6dc-jbst8" [b21e5438-0dad-41b6-ae6d-eaa3efa8af98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-jbst8" [b21e5438-0dad-41b6-ae6d-eaa3efa8af98] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.010603768s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (13.56s)

TestNetworkPlugins/group/cilium/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220801171038-13911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.13s)

TestNetworkPlugins/group/cilium/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220801171038-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.11s)

TestNetworkPlugins/group/cilium/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220801171038-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.12s)

TestNetworkPlugins/group/false/Start (46.65s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220801171038-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
E0801 17:25:11.949388   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220801171038-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (46.653237427s)
--- PASS: TestNetworkPlugins/group/false/Start (46.65s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-92lvd" [6da3cb76-f0df-4a92-9284-11a75284f630] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.015510374s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.52s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220801171038-13911 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.52s)

TestNetworkPlugins/group/calico/NetCatPod (11.66s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220801171038-13911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context calico-20220801171038-13911 replace --force -f testdata/netcat-deployment.yaml: (1.617768683s)
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-zct8x" [77790800-3064-42ed-b5b6-7efa48765eed] Pending
helpers_test.go:342: "netcat-869c55b6dc-zct8x" [77790800-3064-42ed-b5b6-7efa48765eed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-zct8x" [77790800-3064-42ed-b5b6-7efa48765eed] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.010469365s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.66s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220801171038-13911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220801171038-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220801171038-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (81.04s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220801171037-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220801171037-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (1m21.042696469s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.04s)

TestNetworkPlugins/group/false/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220801171038-13911 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.48s)

TestNetworkPlugins/group/false/NetCatPod (11.82s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220801171038-13911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context false-20220801171038-13911 replace --force -f testdata/netcat-deployment.yaml: (1.782239565s)
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-pdr52" [39df8ebf-e552-49de-96ff-0b32aca9f9a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-pdr52" [39df8ebf-e552-49de-96ff-0b32aca9f9a1] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.007798611s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.82s)

TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220801171038-13911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220801171038-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (5.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220801171038-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220801171038-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.115066256s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.12s)

TestNetworkPlugins/group/enable-default-cni/Start (45.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220801171037-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E0801 17:26:40.514025   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:26:40.519122   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:26:40.529863   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:26:40.550022   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:26:40.590660   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:26:40.670885   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:26:40.831719   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:26:41.151919   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:26:41.792091   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:26:43.073057   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:26:45.634280   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:26:50.754700   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220801171037-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (45.206474369s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (45.21s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220801171037-13911 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.52s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.87s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220801171037-13911 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.87s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.72s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220801171037-13911 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context enable-default-cni-20220801171037-13911 replace --force -f testdata/netcat-deployment.yaml: (2.567843543s)
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-zfhpc" [c8d173a8-9ff1-4908-8165-85f744d6c4c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-zfhpc" [c8d173a8-9ff1-4908-8165-85f744d6c4c9] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.088275915s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.72s)

TestNetworkPlugins/group/bridge/NetCatPod (12.71s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220801171037-13911 replace --force -f testdata/netcat-deployment.yaml
E0801 17:27:00.995095   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context bridge-20220801171037-13911 replace --force -f testdata/netcat-deployment.yaml: (2.674729946s)
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-kn59g" [cdfbf3f0-80f0-4bba-8818-79457f193ccc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-kn59g" [cdfbf3f0-80f0-4bba-8818-79457f193ccc] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.008018461s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.71s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801171037-13911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801171037-13911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/Start (47.01s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220801171037-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220801171037-13911 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (47.01341391s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (47.01s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220801171037-13911 "pgrep -a kubelet"
E0801 17:28:02.436694   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.65s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220801171037-13911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kubenet-20220801171037-13911 replace --force -f testdata/netcat-deployment.yaml: (1.612115845s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-67bkx" [1bcda64a-138f-408c-a72c-5c2f72f227a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-67bkx" [1bcda64a-138f-408c-a72c-5c2f72f227a0] Running
E0801 17:28:10.805723   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.009117313s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.65s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20220801171037-13911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (44.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220801172918-13911 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3
E0801 17:29:24.359473   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:29:43.333768   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:29:43.339083   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:29:43.349698   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:29:43.370126   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:29:43.410511   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:29:43.490632   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:29:43.652310   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:29:43.973038   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:29:44.613262   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:29:45.895267   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:29:48.455622   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:29:53.577985   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220801172918-13911 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3: (44.890985652s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.89s)

TestStartStop/group/embed-certs/serial/DeployApp (9.72s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220801172918-13911 create -f testdata/busybox.yaml
E0801 17:30:03.819452   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) Done: kubectl --context embed-certs-20220801172918-13911 create -f testdata/busybox.yaml: (1.602417303s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [97820899-0a2d-4339-9c1a-a3a861da1e19] Pending
helpers_test.go:342: "busybox" [97820899-0a2d-4339-9c1a-a3a861da1e19] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [97820899-0a2d-4339-9c1a-a3a861da1e19] Running
E0801 17:30:11.954422   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.013592843s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220801172918-13911 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.72s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220801172918-13911 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220801172918-13911 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/embed-certs/serial/Stop (12.56s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220801172918-13911 --alsologtostderr -v=3
E0801 17:30:16.987170   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:16.993144   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:17.004426   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:17.025952   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:17.066730   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:17.147114   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:17.309300   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:17.629530   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:18.269792   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:19.549956   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:22.110439   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:24.300150   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220801172918-13911 --alsologtostderr -v=3: (12.560749544s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.56s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911: exit status 7 (118.231112ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220801172918-13911 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/embed-certs/serial/SecondStart (291.47s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220801172918-13911 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3
E0801 17:30:27.230677   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:34.174828   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:37.470950   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:54.758035   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:54.764531   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:54.775421   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:54.795679   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:54.835926   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:54.917313   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:55.077573   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:55.397955   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:56.038308   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:57.318451   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:57.953510   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:30:59.880741   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:31:05.001619   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory
E0801 17:31:05.262158   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:31:15.243988   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220801172918-13911 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3: (4m50.923779932s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220801172918-13911 -n embed-certs-20220801172918-13911
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (291.47s)

TestStartStop/group/old-k8s-version/serial/Stop (1.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220801172716-13911 --alsologtostderr -v=3
E0801 17:33:00.837600   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220801172716-13911 --alsologtostderr -v=3: (1.660416834s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220801172716-13911 -n old-k8s-version-20220801172716-13911: exit status 7 (120.228438ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220801172716-13911 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-8fcx8" [0d867994-9c56-41dc-9234-3dd9bbe748ef] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-8fcx8" [0d867994-9c56-41dc-9234-3dd9bbe748ef] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.015592069s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.92s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-8fcx8" [0d867994-9c56-41dc-9234-3dd9bbe748ef] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006535957s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220801172918-13911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context embed-certs-20220801172918-13911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.917452079s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.92s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220801172918-13911 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/no-preload/serial/FirstStart (55.35s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220801173626-13911 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3
E0801 17:36:40.570230   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801171037-13911/client.crt: no such file or directory
E0801 17:37:01.330269   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:37:02.188569   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220801173626-13911 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3: (55.349927983s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.35s)

TestStartStop/group/no-preload/serial/DeployApp (9.73s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220801173626-13911 create -f testdata/busybox.yaml
E0801 17:37:22.259032   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) Done: kubectl --context no-preload-20220801173626-13911 create -f testdata/busybox.yaml: (1.599245947s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5d872b78-1a27-4dca-8e30-3d020b692ef8] Pending
helpers_test.go:342: "busybox" [5d872b78-1a27-4dca-8e30-3d020b692ef8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [5d872b78-1a27-4dca-8e30-3d020b692ef8] Running
E0801 17:37:29.075377   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/enable-default-cni-20220801171037-13911/client.crt: no such file or directory
E0801 17:37:29.919403   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/bridge-20220801171037-13911/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.015023007s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220801173626-13911 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.73s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220801173626-13911 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220801173626-13911 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/no-preload/serial/Stop (12.64s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220801173626-13911 --alsologtostderr -v=3
E0801 17:37:37.555943   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory
E0801 17:37:39.192568   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801163958-13911/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220801173626-13911 --alsologtostderr -v=3: (12.641974419s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.64s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911: exit status 7 (117.712665ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220801173626-13911 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/no-preload/serial/SecondStart (299.47s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220801173626-13911 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3
E0801 17:37:50.375152   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory
E0801 17:38:04.515665   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:38:32.204191   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubenet-20220801171037-13911/client.crt: no such file or directory
E0801 17:39:43.393112   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
E0801 17:40:12.011740   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory
E0801 17:40:17.043454   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801171038-13911/client.crt: no such file or directory
E0801 17:40:54.815556   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/false-20220801171038-13911/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220801173626-13911 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3: (4m58.965423251s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220801173626-13911 -n no-preload-20220801173626-13911
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (299.47s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-j8vz8" [5d36ef6e-3081-4a75-a775-d906fc182113] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-j8vz8" [5d36ef6e-3081-4a75-a775-d906fc182113] Running
E0801 17:42:50.379633   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801171038-13911/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.013948044s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.6s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-j8vz8" [5d36ef6e-3081-4a75-a775-d906fc182113] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008820897s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220801173626-13911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context no-preload-20220801173626-13911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.590719075s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.60s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220801173626-13911 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (52.95s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220801174348-13911 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220801174348-13911 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3: (52.949682409s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (52.95s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.72s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220801174348-13911 create -f testdata/busybox.yaml

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Done: kubectl --context default-k8s-different-port-20220801174348-13911 create -f testdata/busybox.yaml: (1.595657023s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E0801 17:44:43.395619   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801171038-13911/client.crt: no such file or directory
helpers_test.go:342: "busybox" [4c40b5a3-3191-4fd0-973f-4b4700f8ad35] Pending
helpers_test.go:342: "busybox" [4c40b5a3-3191-4fd0-973f-4b4700f8ad35] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [4c40b5a3-3191-4fd0-973f-4b4700f8ad35] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 10.012891215s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220801174348-13911 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.72s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220801174348-13911 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220801174348-13911 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/default-k8s-different-port/serial/Stop (12.54s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220801174348-13911 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220801174348-13911 --alsologtostderr -v=3: (12.539027002s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (12.54s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911: exit status 7 (119.161441ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220801174348-13911 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (299.18s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220801174348-13911 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3
E0801 17:45:12.014368   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220801174348-13911 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3: (4m58.723156242s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220801174348-13911 -n default-k8s-different-port-20220801174348-13911
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (299.18s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (29.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-49lj4" [180a2c1b-6569-45b1-8704-8dd02927b1bd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0801 17:50:07.054942   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801173626-13911/client.crt: no such file or directory
E0801 17:50:12.020906   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/skaffold-20220801170923-13911/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-49lj4" [180a2c1b-6569-45b1-8704-8dd02927b1bd] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 29.015793615s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (29.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.55s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-49lj4" [180a2c1b-6569-45b1-8704-8dd02927b1bd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00615476s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220801174348-13911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context default-k8s-different-port-20220801174348-13911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.542522508s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.55s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220801174348-13911 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.49s)

TestStartStop/group/newest-cni/serial/FirstStart (40.34s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220801175129-13911 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220801175129-13911 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3: (40.343459942s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.34s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220801175129-13911 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/newest-cni/serial/Stop (12.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220801175129-13911 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220801175129-13911 --alsologtostderr -v=3: (12.588321969s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.59s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911: exit status 7 (121.904064ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220801175129-13911 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/newest-cni/serial/SecondStart (18s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220801175129-13911 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220801175129-13911 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3: (17.492596923s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220801175129-13911 -n newest-cni-20220801175129-13911
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220801175129-13911 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)

Test skip (18/289)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.24.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.24.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.3/cached-images (0.00s)

TestDownloadOnly/v1.24.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.24.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.3/binaries (0.00s)

TestAddons/parallel/Registry (17.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 13.503992ms
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-484kb" [f6309c02-ade6-4111-a035-b7b3c88f8805] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009992529s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-qx6th" [3eb901bd-ece9-4f25-ad01-507d03e9655f] Running
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010117708s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220801163530-13911 delete po -l run=registry-test --now

=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) Done: kubectl --context addons-20220801163530-13911 delete po -l run=registry-test --now: (2.974198328s)
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220801163530-13911 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-20220801163530-13911 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.864142333s)
addons_test.go:307: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (17.88s)

TestAddons/parallel/Ingress (11.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220801163530-13911 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220801163530-13911 replace --force -f testdata/nginx-ingress-v1.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220801163530-13911 replace --force -f testdata/nginx-pod-svc.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [6d8a157c-203f-4469-854a-8366608ee1ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [6d8a157c-203f-4469-854a-8366608ee1ea] Running
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.008105044s
addons_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220801163530-13911 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.15s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (15.11s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220801163958-13911 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220801163958-13911 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-rnpwp" [2774c0ac-f2c8-44bb-910d-be1c95fe156a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-rnpwp" [2774c0ac-f2c8-44bb-910d-be1c95fe156a] Running
E0801 16:43:18.392198   13911 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14695-13048-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801163530-13911/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.007134694s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (15.11s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.65s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220801171037-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220801171037-13911
--- SKIP: TestNetworkPlugins/group/flannel (0.65s)

TestNetworkPlugins/group/custom-flannel (0.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220801171038-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-20220801171038-13911
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.61s)

TestStartStop/group/disable-driver-mounts (0.45s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220801173625-13911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220801173625-13911
--- SKIP: TestStartStop/group/disable-driver-mounts (0.45s)
