Test Report: Docker_macOS 14555

                    
9b4ecbb2d2dd64a0f495a0351a574dab999c1b37:2022-07-25:25013

Failed tests (24/289)

TestDownloadOnly/v1.16.0/preload-exists (0.12s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.12s)
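Both preload-exists failures reduce to the same missing file: the assertion at aaa_download_only_test.go:107 stats the expected tarball under the run's .minikube/cache/preloaded-tarball directory and fails when stat returns "no such file or directory". A minimal Go sketch of that kind of existence check, assuming the path layout shown in the log (the helper name and file-name scheme below are inferred from the failing path, not taken from minikube's source):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadTarballPath rebuilds the cache path the failures above point at.
// The naming scheme (preload v18, k8s version, docker/overlay2, amd64) is
// an assumption based on the logged path.
func preloadTarballPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, ".minikube", "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadTarballPath(os.Getenv("MINIKUBE_HOME"), "v1.16.0")
	if _, err := os.Stat(p); err != nil {
		// A tarball that was never downloaded surfaces here as a
		// *fs.PathError wrapping ENOENT, matching the message above.
		fmt.Printf("failed to verify preloaded tarball file exists: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("preload exists:", p)
}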

                                                
                                    
TestDownloadOnly/v1.24.3/preload-exists (0.1s)

=== RUN   TestDownloadOnly/v1.24.3/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.24.3/preload-exists (0.10s)

TestDownloadOnlyKic (1.94s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220725155344-14919 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220725155344-14919 --force --alsologtostderr --driver=docker : (1.400370422s)
aaa_download_only_test.go:236: failed to read tarball file "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4": open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4: no such file or directory
aaa_download_only_test.go:246: failed to read checksum file "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4.checksum" : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4.checksum: no such file or directory
aaa_download_only_test.go:249: failed to verify checksum. checksum of "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4" does not match remote checksum ("" != "\xd4\x1d\x8cُ\x00\xb2\x04\xe9\x80\t\x98\xec\xf8B~")
helpers_test.go:175: Cleaning up "download-docker-20220725155344-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220725155344-14919
--- FAIL: TestDownloadOnlyKic (1.94s)
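TestDownloadOnlyKic goes a step further than the preload checks: it reads the tarball, reads its .checksum sidecar, and compares digests, so one missing download yields all three messages above. Incidentally, the escaped remote bytes in the log decode to d41d8cd98f00b204e9800998ecf8427e, the MD5 of empty input. A short sketch of the file-hash side of such a comparison, assuming MD5 as the ?checksum=md5: download URLs later in this report suggest (helper names are illustrative, not minikube's implementation):

package main

import (
	"bytes"
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// fileMD5 streams a file through MD5 and returns the raw digest bytes,
// the same form as the escaped byte string printed by the test.
func fileMD5(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err // a missing tarball fails here, leaving an empty local digest
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return nil, err
	}
	return h.Sum(nil), nil
}

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: verify <tarball> <hex-md5>")
		os.Exit(2)
	}
	local, err := fileMD5(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to read tarball file:", err)
		os.Exit(1)
	}
	want, err := hex.DecodeString(os.Args[2])
	if err != nil {
		fmt.Fprintln(os.Stderr, "bad expected checksum:", err)
		os.Exit(2)
	}
	if !bytes.Equal(local, want) {
		fmt.Fprintf(os.Stderr, "failed to verify checksum: %x != %x\n", local, want)
		os.Exit(1)
	}
	fmt.Println("checksum ok")
}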

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (252.98s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220725160328-14919 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0725 16:03:54.361014   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:06:10.502285   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:06:38.201665   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:06:55.795081   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:06:55.801133   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:06:55.813424   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:06:55.835705   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:06:55.875958   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:06:55.956167   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:06:56.116644   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:06:56.437724   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:06:57.078029   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:06:58.359221   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:07:00.921537   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:07:06.043827   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:07:16.284101   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:07:36.765529   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220725160328-14919 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m12.95091726s)

-- stdout --
	* [ingress-addon-legacy-20220725160328-14919] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-20220725160328-14919 in cluster ingress-addon-legacy-20220725160328-14919
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...

-- /stdout --
** stderr ** 
	I0725 16:03:28.158581   19071 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:03:28.158779   19071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:03:28.158784   19071 out.go:309] Setting ErrFile to fd 2...
	I0725 16:03:28.158788   19071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:03:28.158887   19071 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:03:28.159452   19071 out.go:303] Setting JSON to false
	I0725 16:03:28.174195   19071 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7132,"bootTime":1658783076,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:03:28.174273   19071 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:03:28.196044   19071 out.go:177] * [ingress-addon-legacy-20220725160328-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:03:28.217517   19071 notify.go:193] Checking for updates...
	I0725 16:03:28.239328   19071 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:03:28.261108   19071 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:03:28.282490   19071 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:03:28.304367   19071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:03:28.325399   19071 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:03:28.347694   19071 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:03:28.418197   19071 docker.go:137] docker version: linux-20.10.17
	I0725 16:03:28.418306   19071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:03:28.551659   19071 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-25 23:03:28.490389816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:03:28.595240   19071 out.go:177] * Using the docker driver based on user configuration
	I0725 16:03:28.616472   19071 start.go:284] selected driver: docker
	I0725 16:03:28.616512   19071 start.go:808] validating driver "docker" against <nil>
	I0725 16:03:28.616541   19071 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:03:28.619911   19071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:03:28.753223   19071 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-25 23:03:28.691877391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:03:28.753361   19071 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0725 16:03:28.753549   19071 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 16:03:28.775125   19071 out.go:177] * Using Docker Desktop driver with root privileges
	I0725 16:03:28.796904   19071 cni.go:95] Creating CNI manager for ""
	I0725 16:03:28.796937   19071 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:03:28.796949   19071 start_flags.go:310] config:
	{Name:ingress-addon-legacy-20220725160328-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220725160328-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:03:28.818858   19071 out.go:177] * Starting control plane node ingress-addon-legacy-20220725160328-14919 in cluster ingress-addon-legacy-20220725160328-14919
	I0725 16:03:28.840024   19071 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:03:28.861838   19071 out.go:177] * Pulling base image ...
	I0725 16:03:28.904151   19071 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0725 16:03:28.904151   19071 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:03:28.969490   19071 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:03:28.969527   19071 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:03:28.984615   19071 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0725 16:03:28.984640   19071 cache.go:57] Caching tarball of preloaded images
	I0725 16:03:28.985034   19071 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0725 16:03:29.028452   19071 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0725 16:03:29.050483   19071 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0725 16:03:29.161146   19071 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0725 16:03:31.853845   19071 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0725 16:03:31.853990   19071 preload.go:256] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0725 16:03:32.472693   19071 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0725 16:03:32.473012   19071 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/config.json ...
	I0725 16:03:32.473034   19071 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/config.json: {Name:mkf37eb62a262ea1ce0f39083943bfe7db5cccb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:03:32.473442   19071 cache.go:208] Successfully downloaded all kic artifacts
	I0725 16:03:32.473469   19071 start.go:370] acquiring machines lock for ingress-addon-legacy-20220725160328-14919: {Name:mk070c9323563b836a08c5dbf55cac7f5c8b6a31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:03:32.473650   19071 start.go:374] acquired machines lock for "ingress-addon-legacy-20220725160328-14919" in 172.846µs
	I0725 16:03:32.473713   19071 start.go:92] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220725160328-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220725160328-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 16:03:32.473808   19071 start.go:132] createHost starting for "" (driver="docker")
	I0725 16:03:32.515576   19071 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0725 16:03:32.515869   19071 start.go:166] libmachine.API.Create for "ingress-addon-legacy-20220725160328-14919" (driver="docker")
	I0725 16:03:32.515913   19071 client.go:168] LocalClient.Create starting
	I0725 16:03:32.516054   19071 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem
	I0725 16:03:32.516121   19071 main.go:134] libmachine: Decoding PEM data...
	I0725 16:03:32.516149   19071 main.go:134] libmachine: Parsing certificate...
	I0725 16:03:32.516232   19071 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem
	I0725 16:03:32.516282   19071 main.go:134] libmachine: Decoding PEM data...
	I0725 16:03:32.516301   19071 main.go:134] libmachine: Parsing certificate...
	I0725 16:03:32.517235   19071 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220725160328-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 16:03:32.585030   19071 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220725160328-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 16:03:32.585138   19071 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220725160328-14919] to gather additional debugging logs...
	I0725 16:03:32.585162   19071 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220725160328-14919
	W0725 16:03:32.648074   19071 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220725160328-14919 returned with exit code 1
	I0725 16:03:32.648104   19071 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220725160328-14919]: docker network inspect ingress-addon-legacy-20220725160328-14919: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220725160328-14919
	I0725 16:03:32.648120   19071 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220725160328-14919]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220725160328-14919
	
	** /stderr **
	I0725 16:03:32.648228   19071 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 16:03:32.713281   19071 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00062e460] misses:0}
	I0725 16:03:32.713321   19071 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:03:32.713338   19071 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220725160328-14919 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0725 16:03:32.713412   19071 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220725160328-14919 ingress-addon-legacy-20220725160328-14919
	I0725 16:03:32.808271   19071 network_create.go:99] docker network ingress-addon-legacy-20220725160328-14919 192.168.49.0/24 created
	I0725 16:03:32.808313   19071 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20220725160328-14919" container
	I0725 16:03:32.808407   19071 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 16:03:32.872756   19071 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220725160328-14919 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220725160328-14919 --label created_by.minikube.sigs.k8s.io=true
	I0725 16:03:32.937260   19071 oci.go:103] Successfully created a docker volume ingress-addon-legacy-20220725160328-14919
	I0725 16:03:32.937384   19071 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-20220725160328-14919-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220725160328-14919 --entrypoint /usr/bin/test -v ingress-addon-legacy-20220725160328-14919:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0725 16:03:33.393702   19071 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-20220725160328-14919
	I0725 16:03:33.393742   19071 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0725 16:03:33.393756   19071 kic.go:179] Starting extracting preloaded images to volume ...
	I0725 16:03:33.393872   19071 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220725160328-14919:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0725 16:03:38.038056   19071 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220725160328-14919:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (4.644045758s)
	I0725 16:03:38.038079   19071 kic.go:188] duration metric: took 4.644348 seconds to extract preloaded images to volume
	I0725 16:03:38.038222   19071 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 16:03:38.195597   19071 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-20220725160328-14919 --name ingress-addon-legacy-20220725160328-14919 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220725160328-14919 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-20220725160328-14919 --network ingress-addon-legacy-20220725160328-14919 --ip 192.168.49.2 --volume ingress-addon-legacy-20220725160328-14919:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0725 16:03:38.559676   19071 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725160328-14919 --format={{.State.Running}}
	I0725 16:03:38.631234   19071 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725160328-14919 --format={{.State.Status}}
	I0725 16:03:38.711277   19071 cli_runner.go:164] Run: docker exec ingress-addon-legacy-20220725160328-14919 stat /var/lib/dpkg/alternatives/iptables
	I0725 16:03:38.838371   19071 oci.go:144] the created container "ingress-addon-legacy-20220725160328-14919" has a running status.
	I0725 16:03:38.838400   19071 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/ingress-addon-legacy-20220725160328-14919/id_rsa...
	I0725 16:03:38.929032   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/ingress-addon-legacy-20220725160328-14919/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0725 16:03:38.929100   19071 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/ingress-addon-legacy-20220725160328-14919/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0725 16:03:39.045381   19071 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725160328-14919 --format={{.State.Status}}
	I0725 16:03:39.113670   19071 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 16:03:39.113695   19071 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-20220725160328-14919 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0725 16:03:39.230980   19071 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725160328-14919 --format={{.State.Status}}
	I0725 16:03:39.299578   19071 machine.go:88] provisioning docker machine ...
	I0725 16:03:39.299614   19071 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-20220725160328-14919"
	I0725 16:03:39.299697   19071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:39.369829   19071 main.go:134] libmachine: Using SSH client type: native
	I0725 16:03:39.370023   19071 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57441 <nil> <nil>}
	I0725 16:03:39.370037   19071 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-20220725160328-14919 && echo "ingress-addon-legacy-20220725160328-14919" | sudo tee /etc/hostname
	I0725 16:03:39.503085   19071 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-20220725160328-14919
	
	I0725 16:03:39.503162   19071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:39.571826   19071 main.go:134] libmachine: Using SSH client type: native
	I0725 16:03:39.571995   19071 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57441 <nil> <nil>}
	I0725 16:03:39.572011   19071 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-20220725160328-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-20220725160328-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-20220725160328-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 16:03:39.694365   19071 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:03:39.694393   19071 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:03:39.694421   19071 ubuntu.go:177] setting up certificates
	I0725 16:03:39.694431   19071 provision.go:83] configureAuth start
	I0725 16:03:39.694512   19071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:39.763793   19071 provision.go:138] copyHostCerts
	I0725 16:03:39.763832   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:03:39.763895   19071 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:03:39.763907   19071 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:03:39.764013   19071 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:03:39.764188   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:03:39.764219   19071 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:03:39.764224   19071 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:03:39.764289   19071 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:03:39.764405   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:03:39.764439   19071 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:03:39.764444   19071 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:03:39.764499   19071 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:03:39.764619   19071 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-20220725160328-14919 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-20220725160328-14919]
	I0725 16:03:39.876634   19071 provision.go:172] copyRemoteCerts
	I0725 16:03:39.876695   19071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:03:39.876743   19071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:39.946208   19071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57441 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/ingress-addon-legacy-20220725160328-14919/id_rsa Username:docker}
	I0725 16:03:40.035427   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 16:03:40.035518   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 16:03:40.051894   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 16:03:40.051964   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:03:40.068925   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 16:03:40.068990   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1294 bytes)
	I0725 16:03:40.084921   19071 provision.go:86] duration metric: configureAuth took 390.47818ms
	I0725 16:03:40.084955   19071 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:03:40.085181   19071 config.go:178] Loaded profile config "ingress-addon-legacy-20220725160328-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0725 16:03:40.085271   19071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:40.155207   19071 main.go:134] libmachine: Using SSH client type: native
	I0725 16:03:40.155391   19071 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57441 <nil> <nil>}
	I0725 16:03:40.155407   19071 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:03:40.282759   19071 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:03:40.282772   19071 ubuntu.go:71] root file system type: overlay
	I0725 16:03:40.282915   19071 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:03:40.282985   19071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:40.352331   19071 main.go:134] libmachine: Using SSH client type: native
	I0725 16:03:40.352489   19071 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57441 <nil> <nil>}
	I0725 16:03:40.352539   19071 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:03:40.483530   19071 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:03:40.483629   19071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:40.552567   19071 main.go:134] libmachine: Using SSH client type: native
	I0725 16:03:40.552727   19071 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57441 <nil> <nil>}
	I0725 16:03:40.552740   19071 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 16:03:41.128519   19071 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:03:40.482808039 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
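The diff above illustrates the standard systemd pattern for replacing an inherited ExecStart=: the directive is first set to empty to clear the command from the base unit, then set to the new command. Without the clearing line, a second ExecStart= on a Type=notify service is invalid, as the comment in the generated unit notes. A minimal sketch of the same pattern applied as a drop-in override rather than by rewriting the unit file; the override path and the dockerd flags here are illustrative, not taken from this run:

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	# Clear the command inherited from the base unit, then set the new one.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker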
	
	I0725 16:03:41.128541   19071 machine.go:91] provisioned docker machine in 1.828954985s
	I0725 16:03:41.128547   19071 client.go:171] LocalClient.Create took 8.612675462s
	I0725 16:03:41.128561   19071 start.go:174] duration metric: libmachine.API.Create for "ingress-addon-legacy-20220725160328-14919" took 8.612741858s
	I0725 16:03:41.128609   19071 start.go:307] post-start starting for "ingress-addon-legacy-20220725160328-14919" (driver="docker")
	I0725 16:03:41.128614   19071 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:03:41.128687   19071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:03:41.128739   19071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:41.199669   19071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57441 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/ingress-addon-legacy-20220725160328-14919/id_rsa Username:docker}
	I0725 16:03:41.289318   19071 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:03:41.292678   19071 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:03:41.292695   19071 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:03:41.292704   19071 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:03:41.292710   19071 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 16:03:41.292722   19071 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:03:41.292828   19071 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:03:41.292984   19071 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:03:41.292991   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> /etc/ssl/certs/149192.pem
	I0725 16:03:41.293157   19071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:03:41.300014   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:03:41.317406   19071 start.go:310] post-start completed in 188.790564ms
	I0725 16:03:41.317903   19071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:41.386890   19071 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/config.json ...
	I0725 16:03:41.387307   19071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:03:41.387355   19071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:41.456173   19071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57441 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/ingress-addon-legacy-20220725160328-14919/id_rsa Username:docker}
	I0725 16:03:41.542210   19071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:03:41.546782   19071 start.go:135] duration metric: createHost completed in 9.073016852s
	I0725 16:03:41.546799   19071 start.go:82] releasing machines lock for "ingress-addon-legacy-20220725160328-14919", held for 9.073173626s
	I0725 16:03:41.546875   19071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:41.614454   19071 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:03:41.614456   19071 ssh_runner.go:195] Run: systemctl --version
	I0725 16:03:41.614530   19071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:41.614527   19071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:41.693541   19071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57441 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/ingress-addon-legacy-20220725160328-14919/id_rsa Username:docker}
	I0725 16:03:41.694291   19071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57441 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/ingress-addon-legacy-20220725160328-14919/id_rsa Username:docker}
	I0725 16:03:42.002419   19071 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:03:42.011734   19071 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:03:42.011793   19071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:03:42.020579   19071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:03:42.032650   19071 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 16:03:42.095003   19071 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:03:42.162759   19071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:03:42.227153   19071 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:03:42.424189   19071 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:03:42.460440   19071 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:03:42.541110   19071 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	I0725 16:03:42.541284   19071 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-20220725160328-14919 dig +short host.docker.internal
	I0725 16:03:42.675057   19071 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:03:42.675146   19071 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:03:42.680251   19071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:03:42.690068   19071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:03:42.760178   19071 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0725 16:03:42.760253   19071 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:03:42.789987   19071 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0725 16:03:42.790005   19071 docker.go:542] Images already preloaded, skipping extraction
	I0725 16:03:42.790148   19071 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:03:42.819950   19071 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0725 16:03:42.819967   19071 cache_images.go:84] Images are preloaded, skipping loading
	I0725 16:03:42.820054   19071 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
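This query is the usual mismatch check: the kubelet configuration generated below pins cgroupDriver: systemd, and the kubelet will not come up cleanly if the runtime reports a different driver. The same check can be run by hand on the node:

	docker info --format '{{.CgroupDriver}}'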
	I0725 16:03:42.894417   19071 cni.go:95] Creating CNI manager for ""
	I0725 16:03:42.894429   19071 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:03:42.894444   19071 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:03:42.894464   19071 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-20220725160328-14919 NodeName:ingress-addon-legacy-20220725160328-14919 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:03:42.894613   19071 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-20220725160328-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
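A generated config of this shape can be sanity-checked without touching host state via kubeadm's --dry-run flag, which prints what would be done instead of applying it; the path below is the one this run writes the config to:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run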
	
	I0725 16:03:42.894731   19071 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-20220725160328-14919 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220725160328-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
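Since the kubelet command line is injected through the 10-kubeadm.conf drop-in over the base kubelet.service, the effective unit on the node is the merge of the two. Standard systemd tooling shows and lints the merged result:

	systemctl cat kubelet
	systemd-analyze verify /lib/systemd/system/kubelet.service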
	I0725 16:03:42.894824   19071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0725 16:03:42.902257   19071 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 16:03:42.902306   19071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:03:42.909196   19071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0725 16:03:42.921524   19071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0725 16:03:42.934180   19071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0725 16:03:42.947187   19071 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:03:42.950882   19071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:03:42.959954   19071 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919 for IP: 192.168.49.2
	I0725 16:03:42.960084   19071 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:03:42.960138   19071 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:03:42.960176   19071 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/client.key
	I0725 16:03:42.960189   19071 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/client.crt with IP's: []
	I0725 16:03:43.026725   19071 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/client.crt ...
	I0725 16:03:43.026735   19071 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/client.crt: {Name:mkfd303085bf2b3cf3ab979d46bbd9b64cc5859c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:03:43.027053   19071 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/client.key ...
	I0725 16:03:43.027065   19071 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/client.key: {Name:mk769c44e2e5c5c3750ca95b57e582de82bd4258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:03:43.027291   19071 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.key.dd3b5fb2
	I0725 16:03:43.027306   19071 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0725 16:03:43.157284   19071 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.crt.dd3b5fb2 ...
	I0725 16:03:43.157293   19071 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.crt.dd3b5fb2: {Name:mke0b3c6e760cee9bf458ff37cdda5d482eb046c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:03:43.157512   19071 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.key.dd3b5fb2 ...
	I0725 16:03:43.157520   19071 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.key.dd3b5fb2: {Name:mke98038b725598a565e34de03ac2f32dfb08b3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:03:43.157721   19071 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.crt
	I0725 16:03:43.157875   19071 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.key
	I0725 16:03:43.158035   19071 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/proxy-client.key
	I0725 16:03:43.158049   19071 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/proxy-client.crt with IP's: []
	I0725 16:03:43.270480   19071 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/proxy-client.crt ...
	I0725 16:03:43.270490   19071 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/proxy-client.crt: {Name:mkae0870ac3ea2e0eb97132c5c5598bfac2a1e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:03:43.270765   19071 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/proxy-client.key ...
	I0725 16:03:43.270776   19071 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/proxy-client.key: {Name:mk14135473df0a7065f657da4c09b29453d49532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:03:43.270986   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 16:03:43.271014   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 16:03:43.271032   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 16:03:43.271050   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 16:03:43.271073   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 16:03:43.271089   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 16:03:43.271112   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 16:03:43.271130   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 16:03:43.271239   19071 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:03:43.271277   19071 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:03:43.271286   19071 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:03:43.271317   19071 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:03:43.271348   19071 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:03:43.271381   19071 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:03:43.271456   19071 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:03:43.271486   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:03:43.271503   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem -> /usr/share/ca-certificates/14919.pem
	I0725 16:03:43.271518   19071 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> /usr/share/ca-certificates/149192.pem
	I0725 16:03:43.272030   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:03:43.290550   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 16:03:43.307186   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:03:43.323923   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/ingress-addon-legacy-20220725160328-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 16:03:43.340437   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:03:43.357339   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:03:43.373561   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:03:43.390860   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:03:43.406985   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:03:43.424498   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:03:43.441896   19071 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:03:43.459452   19071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 16:03:43.472154   19071 ssh_runner.go:195] Run: openssl version
	I0725 16:03:43.477270   19071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:03:43.484927   19071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:03:43.489000   19071 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:03:43.489043   19071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:03:43.494355   19071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
	I0725 16:03:43.501955   19071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:03:43.509747   19071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:03:43.513964   19071 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:03:43.514022   19071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:03:43.519074   19071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 16:03:43.526542   19071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:03:43.534452   19071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:03:43.538438   19071 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:03:43.538474   19071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:03:43.543607   19071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 16:03:43.551074   19071 kubeadm.go:395] StartCluster: {Name:ingress-addon-legacy-20220725160328-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220725160328-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:03:43.551168   19071 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:03:43.580198   19071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:03:43.587771   19071 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:03:43.594738   19071 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:03:43.594790   19071 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:03:43.601570   19071 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:03:43.601596   19071 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:03:44.349891   19071 out.go:204]   - Generating certificates and keys ...
	I0725 16:03:46.740752   19071 out.go:204]   - Booting up control plane ...
	W0725 16:05:41.658092   19071 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220725160328-14919 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220725160328-14919 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0725 23:03:43.651041     954 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 23:03:46.729425     954 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 23:03:46.730311     954 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
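The troubleshooting hints embedded in the kubeadm output above can be gathered in a single pass on the node; the kubelet unit name and the docker CLI below match what this run uses:

	systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	docker ps -a | grep kube | grep -v pause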
	
	I0725 16:05:41.658129   19071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:05:42.085855   19071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:05:42.095936   19071 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:05:42.095986   19071 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:05:42.102570   19071 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:05:42.102588   19071 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:05:42.797593   19071 out.go:204]   - Generating certificates and keys ...
	I0725 16:05:43.499154   19071 out.go:204]   - Booting up control plane ...
	I0725 16:07:38.444274   19071 kubeadm.go:397] StartCluster complete in 3m54.894477051s
	I0725 16:07:38.444353   19071 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:07:38.473021   19071 logs.go:274] 0 containers: []
	W0725 16:07:38.473033   19071 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:07:38.473095   19071 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:07:38.500674   19071 logs.go:274] 0 containers: []
	W0725 16:07:38.500687   19071 logs.go:276] No container was found matching "etcd"
	I0725 16:07:38.500752   19071 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:07:38.528480   19071 logs.go:274] 0 containers: []
	W0725 16:07:38.528492   19071 logs.go:276] No container was found matching "coredns"
	I0725 16:07:38.528549   19071 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:07:38.557074   19071 logs.go:274] 0 containers: []
	W0725 16:07:38.557086   19071 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:07:38.557144   19071 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:07:38.585232   19071 logs.go:274] 0 containers: []
	W0725 16:07:38.585245   19071 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:07:38.585309   19071 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:07:38.613311   19071 logs.go:274] 0 containers: []
	W0725 16:07:38.613323   19071 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:07:38.613379   19071 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:07:38.642363   19071 logs.go:274] 0 containers: []
	W0725 16:07:38.642375   19071 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:07:38.642434   19071 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:07:38.670521   19071 logs.go:274] 0 containers: []
	W0725 16:07:38.670534   19071 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:07:38.670540   19071 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:07:38.670548   19071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:07:38.720736   19071 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:07:38.720746   19071 logs.go:123] Gathering logs for Docker ...
	I0725 16:07:38.720754   19071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:07:38.736371   19071 logs.go:123] Gathering logs for container status ...
	I0725 16:07:38.736385   19071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:07:40.790402   19071 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054015614s)
	I0725 16:07:40.790551   19071 logs.go:123] Gathering logs for kubelet ...
	I0725 16:07:40.790558   19071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:07:40.831510   19071 logs.go:123] Gathering logs for dmesg ...
	I0725 16:07:40.831525   19071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0725 16:07:40.843124   19071 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0725 23:05:42.148060    3437 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 23:05:43.507400    3437 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 23:05:43.508442    3437 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 16:07:40.843144   19071 out.go:239] * 
	W0725 16:07:40.843293   19071 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0725 23:05:42.148060    3437 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 23:05:43.507400    3437 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 23:05:43.508442    3437 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 16:07:40.843312   19071 out.go:239] * 
	W0725 16:07:40.843879   19071 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 16:07:40.908616   19071 out.go:177] 
	W0725 16:07:40.953049   19071 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0725 23:05:42.148060    3437 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 23:05:43.507400    3437 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 23:05:43.508442    3437 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 16:07:40.953198   19071 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 16:07:40.953283   19071 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 16:07:40.974400   19071 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220725160328-14919 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (252.98s)
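Note: the suggestion in the log above points at the kubelet cgroup driver. A minimal remediation sketch, assuming the profile name from the failing run is still present (all commands are standard minikube/journalctl invocations; the --extra-config flag is the one the log itself suggests):

	# inspect the kubelet journal inside the minikube node for the underlying failure
	minikube -p ingress-addon-legacy-20220725160328-14919 ssh -- sudo journalctl -xeu kubelet | tail -n 50

	# recreate the cluster, forcing the kubelet onto the systemd cgroup driver as suggested
	minikube delete -p ingress-addon-legacy-20220725160328-14919
	minikube start -p ingress-addon-legacy-20220725160328-14919 \
	  --kubernetes-version=v1.18.20 --memory=4096 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd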

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.61s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220725160328-14919 addons enable ingress --alsologtostderr -v=5
E0725 16:08:17.725697   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220725160328-14919 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.10021205s)

                                                
                                                
-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 16:07:41.138671   19430 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:07:41.139150   19430 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:07:41.139156   19430 out.go:309] Setting ErrFile to fd 2...
	I0725 16:07:41.139160   19430 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:07:41.139262   19430 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:07:41.139846   19430 config.go:178] Loaded profile config "ingress-addon-legacy-20220725160328-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0725 16:07:41.139861   19430 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220725160328-14919"
	I0725 16:07:41.139869   19430 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220725160328-14919"
	I0725 16:07:41.140120   19430 host.go:66] Checking if "ingress-addon-legacy-20220725160328-14919" exists ...
	I0725 16:07:41.140596   19430 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725160328-14919 --format={{.State.Status}}
	I0725 16:07:41.229354   19430 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0725 16:07:41.251162   19430 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0725 16:07:41.272952   19430 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0725 16:07:41.293903   19430 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0725 16:07:41.315236   19430 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0725 16:07:41.315280   19430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I0725 16:07:41.315419   19430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:07:41.383541   19430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57441 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/ingress-addon-legacy-20220725160328-14919/id_rsa Username:docker}
	I0725 16:07:41.478812   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:07:41.528812   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:41.528833   19430 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:41.805852   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:07:41.857328   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:41.857343   19430 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:42.399836   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:07:42.454086   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:42.454104   19430 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:43.111517   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:07:43.164471   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:43.164491   19430 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:43.956272   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:07:44.008641   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:44.008661   19430 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:45.181234   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:07:45.235977   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:45.235994   19430 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:47.491344   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:07:47.544074   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:47.544094   19430 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:49.157111   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:07:49.209281   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:49.209296   19430 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:52.015919   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:07:52.068423   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:52.068438   19430 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:55.894802   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:07:55.947447   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:07:55.947462   19430 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:08:03.647172   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:08:03.698195   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:08:03.698210   19430 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:08:18.335003   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:08:18.386124   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:08:18.386140   19430 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:08:46.794929   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:08:46.845991   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:08:46.846013   19430 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:10.016481   19430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 16:09:10.067279   19430 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:10.067318   19430 addons.go:383] Verifying addon ingress=true in "ingress-addon-legacy-20220725160328-14919"
	I0725 16:09:10.094070   19430 out.go:177] * Verifying ingress addon...
	I0725 16:09:10.117272   19430 out.go:177] 
	W0725 16:09:10.139125   19430 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220725160328-14919" does not exist: client config: context "ingress-addon-legacy-20220725160328-14919" does not exist]
	W0725 16:09:10.139154   19430 out.go:239] * 
	W0725 16:09:10.143046   19430 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 16:09:10.164768   19430 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
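Note: every retry in the stderr block above failed with the same "connection refused" on localhost:8443, i.e. the apiserver from the preceding failed start never became reachable, so the addon apply could not succeed regardless of backoff. A minimal pre-check sketch, assuming the same profile name (standard minikube and kubectl commands only):

	# confirm the control plane is actually serving before enabling addons
	minikube -p ingress-addon-legacy-20220725160328-14919 status
	kubectl --context ingress-addon-legacy-20220725160328-14919 -n kube-system get pods

	# only then retry the addon
	minikube -p ingress-addon-legacy-20220725160328-14919 addons enable ingress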
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220725160328-14919
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220725160328-14919:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6",
	        "Created": "2022-07-25T23:03:38.282330869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 37324,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:03:38.571798669Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6/hostname",
	        "HostsPath": "/var/lib/docker/containers/3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6/hosts",
	        "LogPath": "/var/lib/docker/containers/3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6/3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6-json.log",
	        "Name": "/ingress-addon-legacy-20220725160328-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220725160328-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220725160328-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3825dc6a7651345d5c8e60776dafea3e93f0b6dfa0bfd5f02423062aa781d42e-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c70
0ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/d
ocker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a70
0e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3825dc6a7651345d5c8e60776dafea3e93f0b6dfa0bfd5f02423062aa781d42e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3825dc6a7651345d5c8e60776dafea3e93f0b6dfa0bfd5f02423062aa781d42e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3825dc6a7651345d5c8e60776dafea3e93f0b6dfa0bfd5f02423062aa781d42e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220725160328-14919",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220725160328-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220725160328-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220725160328-14919",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220725160328-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34d5cdcb3e0f0cbd1385d2e88ca67f9ef6502a05df4d21ec89ec04c283ea6cd7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57441"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57443"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57444"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57445"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/34d5cdcb3e0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220725160328-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3228bc6dd9db",
	                        "ingress-addon-legacy-20220725160328-14919"
	                    ],
	                    "NetworkID": "557dd6237e5e74270158a9691b663f7919a2b77b0b7bd2a14fcc10da3e10b2e7",
	                    "EndpointID": "28dca404f4adb615806f82c1095b1e93e3347d1f72c2e706446da4a58ab9fc64",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725160328-14919 -n ingress-addon-legacy-20220725160328-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725160328-14919 -n ingress-addon-legacy-20220725160328-14919: exit status 6 (435.847773ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 16:09:10.685064   19531 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220725160328-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220725160328-14919" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.61s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.52s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220725160328-14919 addons enable ingress-dns --alsologtostderr -v=5
E0725 16:09:39.645659   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220725160328-14919 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.023112548s)

-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0725 16:09:10.743948   19541 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:09:10.744656   19541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:09:10.744666   19541 out.go:309] Setting ErrFile to fd 2...
	I0725 16:09:10.744670   19541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:09:10.744802   19541 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:09:10.745379   19541 config.go:178] Loaded profile config "ingress-addon-legacy-20220725160328-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0725 16:09:10.745394   19541 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-20220725160328-14919"
	I0725 16:09:10.745402   19541 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-20220725160328-14919"
	I0725 16:09:10.745656   19541 host.go:66] Checking if "ingress-addon-legacy-20220725160328-14919" exists ...
	I0725 16:09:10.746137   19541 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725160328-14919 --format={{.State.Status}}
	I0725 16:09:10.834660   19541 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0725 16:09:10.856700   19541 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0725 16:09:10.878523   19541 addons.go:345] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0725 16:09:10.878568   19541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0725 16:09:10.878695   19541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725160328-14919
	I0725 16:09:10.947153   19541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57441 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/ingress-addon-legacy-20220725160328-14919/id_rsa Username:docker}
	I0725 16:09:11.038632   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:11.094489   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:11.094509   19541 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:11.372922   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:11.423827   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:11.423852   19541 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:11.966371   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:12.020607   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:12.020623   19541 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:12.677371   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:12.729644   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:12.729657   19541 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:13.522555   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:13.574253   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:13.574267   19541 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:14.745359   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:14.794385   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:14.794408   19541 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:17.047947   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:17.098491   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:17.098505   19541 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:18.709763   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:18.762871   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:18.762885   19541 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:21.567734   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:21.620520   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:21.620534   19541 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:25.447696   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:25.499217   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:25.499232   19541 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:33.198970   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:33.250921   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:33.250938   19541 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:47.888735   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:09:47.944059   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:09:47.944081   19541 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:10:16.352951   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:10:16.403880   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:10:16.403897   19541 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:10:39.574404   19541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 16:10:39.624998   19541 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 16:10:39.646981   19541 out.go:177] 
	W0725 16:10:39.669053   19541 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0725 16:10:39.669089   19541 out.go:239] * 
	* 
	W0725 16:10:39.673072   19541 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 16:10:39.694945   19541 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220725160328-14919
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220725160328-14919:

-- stdout --
	[docker inspect output identical to the dump in the previous post-mortem; duplicate elided]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725160328-14919 -n ingress-addon-legacy-20220725160328-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725160328-14919 -n ingress-addon-legacy-20220725160328-14919: exit status 6 (429.404032ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 16:10:40.208039   19641 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220725160328-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220725160328-14919" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.52s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.5s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:158: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220725160328-14919
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220725160328-14919:

-- stdout --
	[
	    {
	        "Id": "3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6",
	        "Created": "2022-07-25T23:03:38.282330869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 37324,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:03:38.571798669Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6/hostname",
	        "HostsPath": "/var/lib/docker/containers/3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6/hosts",
	        "LogPath": "/var/lib/docker/containers/3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6/3228bc6dd9dbb0dad58b4d1f3443cdbae61c2b396e7a182684a542b17780e1b6-json.log",
	        "Name": "/ingress-addon-legacy-20220725160328-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220725160328-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220725160328-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3825dc6a7651345d5c8e60776dafea3e93f0b6dfa0bfd5f02423062aa781d42e-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c70
0ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/d
ocker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a70
0e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3825dc6a7651345d5c8e60776dafea3e93f0b6dfa0bfd5f02423062aa781d42e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3825dc6a7651345d5c8e60776dafea3e93f0b6dfa0bfd5f02423062aa781d42e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3825dc6a7651345d5c8e60776dafea3e93f0b6dfa0bfd5f02423062aa781d42e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220725160328-14919",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220725160328-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220725160328-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220725160328-14919",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220725160328-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34d5cdcb3e0f0cbd1385d2e88ca67f9ef6502a05df4d21ec89ec04c283ea6cd7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57441"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57443"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57444"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57445"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/34d5cdcb3e0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220725160328-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3228bc6dd9db",
	                        "ingress-addon-legacy-20220725160328-14919"
	                    ],
	                    "NetworkID": "557dd6237e5e74270158a9691b663f7919a2b77b0b7bd2a14fcc10da3e10b2e7",
	                    "EndpointID": "28dca404f4adb615806f82c1095b1e93e3347d1f72c2e706446da4a58ab9fc64",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
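
Individual fields from an inspect dump like the one above can be read back with Go templates instead of scanning the raw JSON. A minimal sketch, using this run's container name and the same template syntax minikube itself invokes later in this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-20220725160328-14919
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ingress-addon-legacy-20220725160328-14919

The first command prints the host port mapped to 22/tcp (57441 above); the second prints the container IP on the profile network (192.168.49.2 above).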
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725160328-14919 -n ingress-addon-legacy-20220725160328-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725160328-14919 -n ingress-addon-legacy-20220725160328-14919: exit status 6 (431.151478ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 16:10:40.709197   19653 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220725160328-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220725160328-14919" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.50s)
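
The stderr above names both the cause and the fix: the profile's endpoint is missing from the kubeconfig, and the status output says to run `minikube update-context`. A minimal sketch of that remediation against this run's profile (illustrative; not executed by the test suite):

	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-20220725160328-14919
	out/minikube-darwin-amd64 status -p ingress-addon-legacy-20220725160328-14919

`update-context` rewrites the kubeconfig entry for the profile, after which the status check above should no longer exit with status 6.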

TestPreload (264.95s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220725162319-14919 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0725 16:26:10.548659   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:26:55.843016   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
preload_test.go:48: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20220725162319-14919 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 109 (4m21.880864074s)

-- stdout --
	* [test-preload-20220725162319-14919] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node test-preload-20220725162319-14919 in cluster test-preload-20220725162319-14919
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0725 16:23:19.608098   23417 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:23:19.608291   23417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:23:19.608297   23417 out.go:309] Setting ErrFile to fd 2...
	I0725 16:23:19.608300   23417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:23:19.608400   23417 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:23:19.608892   23417 out.go:303] Setting JSON to false
	I0725 16:23:19.624187   23417 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":8323,"bootTime":1658783076,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:23:19.624320   23417 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:23:19.646301   23417 out.go:177] * [test-preload-20220725162319-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:23:19.690077   23417 notify.go:193] Checking for updates...
	I0725 16:23:19.712286   23417 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:23:19.734260   23417 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:23:19.756199   23417 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:23:19.778475   23417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:23:19.800441   23417 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:23:19.828695   23417 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:23:19.897452   23417 docker.go:137] docker version: linux-20.10.17
	I0725 16:23:19.897572   23417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:23:20.031193   23417 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-25 23:23:19.964504624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:23:20.074121   23417 out.go:177] * Using the docker driver based on user configuration
	I0725 16:23:20.095835   23417 start.go:284] selected driver: docker
	I0725 16:23:20.095864   23417 start.go:808] validating driver "docker" against <nil>
	I0725 16:23:20.095887   23417 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:23:20.099298   23417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:23:20.231717   23417 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-25 23:23:20.16506764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:23:20.231828   23417 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0725 16:23:20.231982   23417 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 16:23:20.253865   23417 out.go:177] * Using Docker Desktop driver with root privileges
	I0725 16:23:20.275535   23417 cni.go:95] Creating CNI manager for ""
	I0725 16:23:20.275568   23417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:23:20.275580   23417 start_flags.go:310] config:
	{Name:test-preload-20220725162319-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220725162319-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:23:20.297430   23417 out.go:177] * Starting control plane node test-preload-20220725162319-14919 in cluster test-preload-20220725162319-14919
	I0725 16:23:20.339545   23417 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:23:20.361319   23417 out.go:177] * Pulling base image ...
	I0725 16:23:20.403673   23417 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0725 16:23:20.403683   23417 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:23:20.404941   23417 cache.go:107] acquiring lock: {Name:mk8fda3a81b59021c9135a18493bfc756ee2f248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:23:20.405476   23417 cache.go:107] acquiring lock: {Name:mk0c92beebc7b7dbe3b56318bff9b307bcb591b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:23:20.404961   23417 cache.go:107] acquiring lock: {Name:mk3d7a79970d93ee3ca26072438f5f2c4a2cef5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:23:20.407266   23417 cache.go:107] acquiring lock: {Name:mk9f553e1b5b63416f8d58259a0839e73839117d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:23:20.407363   23417 cache.go:107] acquiring lock: {Name:mkf774e69c391422fba8442051679a29fbcd5281 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:23:20.407500   23417 cache.go:107] acquiring lock: {Name:mkaaee44a65b6f8183058b64b1c985be89c02bd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:23:20.407543   23417 cache.go:107] acquiring lock: {Name:mk5e03dbd3a75f65cf7a414e1c0b44ea13f4521e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:23:20.407590   23417 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/config.json ...
	I0725 16:23:20.407624   23417 cache.go:107] acquiring lock: {Name:mk3b3834dfa6c1be8dff809ee2048a1fd814712b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:23:20.407649   23417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/config.json: {Name:mk21a1d71dbb2c0c2bb13b1a8ce16fd6c3b21744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:23:20.407807   23417 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0725 16:23:20.407820   23417 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.804577ms
	I0725 16:23:20.407831   23417 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0725 16:23:20.407874   23417 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0725 16:23:20.407929   23417 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0725 16:23:20.408014   23417 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0725 16:23:20.408021   23417 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0725 16:23:20.408057   23417 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 16:23:20.408110   23417 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 16:23:20.408217   23417 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 16:23:20.414529   23417 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0725 16:23:20.416022   23417 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0725 16:23:20.416089   23417 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0725 16:23:20.416025   23417 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0725 16:23:20.416190   23417 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 16:23:20.416498   23417 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 16:23:20.417351   23417 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 16:23:20.474016   23417 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:23:20.474050   23417 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:23:20.474065   23417 cache.go:208] Successfully downloaded all kic artifacts
	I0725 16:23:20.474122   23417 start.go:370] acquiring machines lock for test-preload-20220725162319-14919: {Name:mk2236840408886522b14c07734e20ad14019e01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:23:20.474290   23417 start.go:374] acquired machines lock for "test-preload-20220725162319-14919" in 155.915µs
	I0725 16:23:20.474317   23417 start.go:92] Provisioning new machine with config: &{Name:test-preload-20220725162319-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220725162319-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 16:23:20.474415   23417 start.go:132] createHost starting for "" (driver="docker")
	I0725 16:23:20.516733   23417 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0725 16:23:20.517009   23417 start.go:166] libmachine.API.Create for "test-preload-20220725162319-14919" (driver="docker")
	I0725 16:23:20.517038   23417 client.go:168] LocalClient.Create starting
	I0725 16:23:20.517100   23417 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem
	I0725 16:23:20.517136   23417 main.go:134] libmachine: Decoding PEM data...
	I0725 16:23:20.517151   23417 main.go:134] libmachine: Parsing certificate...
	I0725 16:23:20.517208   23417 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem
	I0725 16:23:20.517233   23417 main.go:134] libmachine: Decoding PEM data...
	I0725 16:23:20.517244   23417 main.go:134] libmachine: Parsing certificate...
	I0725 16:23:20.517728   23417 cli_runner.go:164] Run: docker network inspect test-preload-20220725162319-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 16:23:20.583964   23417 cli_runner.go:211] docker network inspect test-preload-20220725162319-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 16:23:20.584072   23417 network_create.go:272] running [docker network inspect test-preload-20220725162319-14919] to gather additional debugging logs...
	I0725 16:23:20.584088   23417 cli_runner.go:164] Run: docker network inspect test-preload-20220725162319-14919
	W0725 16:23:20.651423   23417 cli_runner.go:211] docker network inspect test-preload-20220725162319-14919 returned with exit code 1
	I0725 16:23:20.651450   23417 network_create.go:275] error running [docker network inspect test-preload-20220725162319-14919]: docker network inspect test-preload-20220725162319-14919: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220725162319-14919
	I0725 16:23:20.651494   23417 network_create.go:277] output of [docker network inspect test-preload-20220725162319-14919]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220725162319-14919
	
	** /stderr **
	I0725 16:23:20.651566   23417 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 16:23:20.720357   23417 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000c70500] misses:0}
	I0725 16:23:20.720406   23417 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:23:20.720426   23417 network_create.go:115] attempt to create docker network test-preload-20220725162319-14919 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0725 16:23:20.720518   23417 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220725162319-14919 test-preload-20220725162319-14919
	W0725 16:23:20.785691   23417 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220725162319-14919 test-preload-20220725162319-14919 returned with exit code 1
	W0725 16:23:20.785733   23417 network_create.go:107] failed to create docker network test-preload-20220725162319-14919 192.168.49.0/24, will retry: subnet is taken
	I0725 16:23:20.785971   23417 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c70500] amended:false}} dirty:map[] misses:0}
	I0725 16:23:20.786002   23417 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:23:20.786204   23417 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c70500] amended:true}} dirty:map[192.168.49.0:0xc000c70500 192.168.58.0:0xc000c70558] misses:0}
	I0725 16:23:20.786216   23417 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:23:20.786226   23417 network_create.go:115] attempt to create docker network test-preload-20220725162319-14919 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0725 16:23:20.786286   23417 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220725162319-14919 test-preload-20220725162319-14919
	W0725 16:23:20.851553   23417 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220725162319-14919 test-preload-20220725162319-14919 returned with exit code 1
	W0725 16:23:20.851591   23417 network_create.go:107] failed to create docker network test-preload-20220725162319-14919 192.168.58.0/24, will retry: subnet is taken
	I0725 16:23:20.851862   23417 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c70500] amended:true}} dirty:map[192.168.49.0:0xc000c70500 192.168.58.0:0xc000c70558] misses:1}
	I0725 16:23:20.851881   23417 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:23:20.852083   23417 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c70500] amended:true}} dirty:map[192.168.49.0:0xc000c70500 192.168.58.0:0xc000c70558 192.168.67.0:0xc00000ede0] misses:1}
	I0725 16:23:20.852096   23417 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:23:20.852105   23417 network_create.go:115] attempt to create docker network test-preload-20220725162319-14919 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0725 16:23:20.852163   23417 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220725162319-14919 test-preload-20220725162319-14919
	I0725 16:23:20.948030   23417 network_create.go:99] docker network test-preload-20220725162319-14919 192.168.67.0/24 created
	I0725 16:23:20.948055   23417 kic.go:106] calculated static IP "192.168.67.2" for the "test-preload-20220725162319-14919" container
	I0725 16:23:20.948128   23417 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 16:23:21.011764   23417 cli_runner.go:164] Run: docker volume create test-preload-20220725162319-14919 --label name.minikube.sigs.k8s.io=test-preload-20220725162319-14919 --label created_by.minikube.sigs.k8s.io=true
	I0725 16:23:21.076709   23417 oci.go:103] Successfully created a docker volume test-preload-20220725162319-14919
	I0725 16:23:21.076788   23417 cli_runner.go:164] Run: docker run --rm --name test-preload-20220725162319-14919-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220725162319-14919 --entrypoint /usr/bin/test -v test-preload-20220725162319-14919:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0725 16:23:21.226214   23417 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0725 16:23:21.305140   23417 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0725 16:23:21.317171   23417 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0725 16:23:21.322228   23417 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0725 16:23:21.356046   23417 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0725 16:23:21.356063   23417 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 948.938974ms
	I0725 16:23:21.356071   23417 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0725 16:23:21.368692   23417 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0725 16:23:21.454726   23417 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0725 16:23:21.504035   23417 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0725 16:23:21.512477   23417 oci.go:107] Successfully prepared a docker volume test-preload-20220725162319-14919
	I0725 16:23:21.512513   23417 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0725 16:23:21.512592   23417 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 16:23:21.654178   23417 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-20220725162319-14919 --name test-preload-20220725162319-14919 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220725162319-14919 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-20220725162319-14919 --network test-preload-20220725162319-14919 --ip 192.168.67.2 --volume test-preload-20220725162319-14919:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0725 16:23:22.053959   23417 cli_runner.go:164] Run: docker container inspect test-preload-20220725162319-14919 --format={{.State.Running}}
	I0725 16:23:22.061816   23417 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0725 16:23:22.061844   23417 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 1.654787243s
	I0725 16:23:22.061868   23417 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0725 16:23:22.129752   23417 cli_runner.go:164] Run: docker container inspect test-preload-20220725162319-14919 --format={{.State.Status}}
	I0725 16:23:22.211192   23417 cli_runner.go:164] Run: docker exec test-preload-20220725162319-14919 stat /var/lib/dpkg/alternatives/iptables
	I0725 16:23:22.351315   23417 oci.go:144] the created container "test-preload-20220725162319-14919" has a running status.
	I0725 16:23:22.351345   23417 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/test-preload-20220725162319-14919/id_rsa...
	I0725 16:23:22.470762   23417 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/test-preload-20220725162319-14919/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0725 16:23:22.549513   23417 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0725 16:23:22.549541   23417 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 2.142530199s
	I0725 16:23:22.549555   23417 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0725 16:23:22.595503   23417 cli_runner.go:164] Run: docker container inspect test-preload-20220725162319-14919 --format={{.State.Status}}
	I0725 16:23:22.668948   23417 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 16:23:22.668966   23417 kic_runner.go:114] Args: [docker exec --privileged test-preload-20220725162319-14919 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0725 16:23:22.716408   23417 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0725 16:23:22.716440   23417 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 2.309146594s
	I0725 16:23:22.716472   23417 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0725 16:23:22.739875   23417 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0725 16:23:22.739900   23417 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 2.335906573s
	I0725 16:23:22.739915   23417 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0725 16:23:22.797315   23417 cli_runner.go:164] Run: docker container inspect test-preload-20220725162319-14919 --format={{.State.Status}}
	I0725 16:23:22.869280   23417 machine.go:88] provisioning docker machine ...
	I0725 16:23:22.869335   23417 ubuntu.go:169] provisioning hostname "test-preload-20220725162319-14919"
	I0725 16:23:22.869422   23417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725162319-14919
	I0725 16:23:22.940999   23417 main.go:134] libmachine: Using SSH client type: native
	I0725 16:23:22.941186   23417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60979 <nil> <nil>}
	I0725 16:23:22.941200   23417 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-20220725162319-14919 && echo "test-preload-20220725162319-14919" | sudo tee /etc/hostname
	I0725 16:23:23.066419   23417 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-20220725162319-14919
	
	I0725 16:23:23.066511   23417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725162319-14919
	I0725 16:23:23.117323   23417 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0725 16:23:23.117348   23417 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 2.709778851s
	I0725 16:23:23.117370   23417 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0725 16:23:23.137004   23417 main.go:134] libmachine: Using SSH client type: native
	I0725 16:23:23.137164   23417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60979 <nil> <nil>}
	I0725 16:23:23.137177   23417 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20220725162319-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20220725162319-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20220725162319-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 16:23:23.256202   23417 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:23:23.256226   23417 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:23:23.256258   23417 ubuntu.go:177] setting up certificates
	I0725 16:23:23.256265   23417 provision.go:83] configureAuth start
	I0725 16:23:23.256338   23417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220725162319-14919
	I0725 16:23:23.325002   23417 provision.go:138] copyHostCerts
	I0725 16:23:23.325076   23417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:23:23.325085   23417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:23:23.325183   23417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:23:23.325373   23417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:23:23.325382   23417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:23:23.325440   23417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:23:23.325591   23417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:23:23.325597   23417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:23:23.325661   23417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:23:23.325797   23417 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.test-preload-20220725162319-14919 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20220725162319-14919]
	I0725 16:23:23.569215   23417 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0725 16:23:23.569236   23417 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 3.16429953s
	I0725 16:23:23.569248   23417 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0725 16:23:23.569263   23417 cache.go:87] Successfully saved all images to host disk.
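Annotation: the cache paths in the lines above follow a simple convention: the image reference keeps its registry/repository layout under cache/images/<arch>/, with the tag separator ':' rewritten to '_'. A tiny sketch of that mapping (hypothetical helper name, not minikube's cache package):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// cachePath maps an image reference to its on-disk cache location,
// mirroring the paths visible in the log above.
func cachePath(miniHome, arch, ref string) string {
	return filepath.Join(miniHome, "cache", "images", arch,
		strings.ReplaceAll(ref, ":", "_"))
}

func main() {
	fmt.Println(cachePath("/Users/jenkins/.minikube", "amd64",
		"k8s.gcr.io/kube-proxy:v1.17.0"))
	// -> /Users/jenkins/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
}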
	I0725 16:23:23.740635   23417 provision.go:172] copyRemoteCerts
	I0725 16:23:23.740704   23417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:23:23.740753   23417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725162319-14919
	I0725 16:23:23.822813   23417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/test-preload-20220725162319-14919/id_rsa Username:docker}
	I0725 16:23:23.914109   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:23:23.930602   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0725 16:23:23.947389   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 16:23:23.964370   23417 provision.go:86] duration metric: configureAuth took 708.094031ms
	I0725 16:23:23.964384   23417 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:23:23.964530   23417 config.go:178] Loaded profile config "test-preload-20220725162319-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0725 16:23:23.964582   23417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725162319-14919
	I0725 16:23:24.034623   23417 main.go:134] libmachine: Using SSH client type: native
	I0725 16:23:24.035026   23417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60979 <nil> <nil>}
	I0725 16:23:24.035040   23417 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:23:24.161726   23417 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:23:24.161739   23417 ubuntu.go:71] root file system type: overlay
	I0725 16:23:24.161902   23417 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:23:24.161976   23417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725162319-14919
	I0725 16:23:24.230750   23417 main.go:134] libmachine: Using SSH client type: native
	I0725 16:23:24.231009   23417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60979 <nil> <nil>}
	I0725 16:23:24.231060   23417 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:23:24.364224   23417 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:23:24.364424   23417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725162319-14919
	I0725 16:23:24.434237   23417 main.go:134] libmachine: Using SSH client type: native
	I0725 16:23:24.434390   23417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60979 <nil> <nil>}
	I0725 16:23:24.434403   23417 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 16:23:25.029019   23417 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:23:24.374169534 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0725 16:23:25.029042   23417 machine.go:91] provisioned docker machine in 2.159744768s
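Annotation: the SSH command above is an idempotent update: diff the live unit against the freshly written docker.service.new, and only when they differ move the new file into place and daemon-reload/enable/restart. A rough local-filesystem equivalent in Go (sketch only; minikube performs this over SSH exactly as shown in the log, and the function name here is invented):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit swaps next into place only when its content differs from
// current, then reloads systemd and restarts the service; otherwise it
// discards next and leaves the running service untouched.
func updateUnit(current, next, service string) (bool, error) {
	old, _ := os.ReadFile(current) // a missing unit reads as empty
	desired, err := os.ReadFile(next)
	if err != nil {
		return false, err
	}
	if bytes.Equal(old, desired) {
		return false, os.Remove(next)
	}
	if err := os.Rename(next, current); err != nil {
		return false, err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", service},
		{"systemctl", "restart", service},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return true, fmt.Errorf("%v failed: %v: %s", args, err, out)
		}
	}
	return true, nil
}

func main() {
	changed, err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	if err != nil {
		panic(err)
	}
	fmt.Println("unit changed:", changed)
}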
	I0725 16:23:25.029251   23417 client.go:171] LocalClient.Create took 4.512212787s
	I0725 16:23:25.029282   23417 start.go:174] duration metric: libmachine.API.Create for "test-preload-20220725162319-14919" took 4.51227675s
	I0725 16:23:25.029314   23417 start.go:307] post-start starting for "test-preload-20220725162319-14919" (driver="docker")
	I0725 16:23:25.029319   23417 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:23:25.029448   23417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:23:25.029548   23417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725162319-14919
	I0725 16:23:25.100040   23417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/test-preload-20220725162319-14919/id_rsa Username:docker}
	I0725 16:23:25.198210   23417 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:23:25.201751   23417 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:23:25.201767   23417 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:23:25.201775   23417 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:23:25.201782   23417 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 16:23:25.201791   23417 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:23:25.201916   23417 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:23:25.202069   23417 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:23:25.202211   23417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:23:25.209203   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:23:25.226333   23417 start.go:310] post-start completed in 197.010046ms
	I0725 16:23:25.226850   23417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220725162319-14919
	I0725 16:23:25.294899   23417 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/config.json ...
	I0725 16:23:25.295454   23417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:23:25.295510   23417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725162319-14919
	I0725 16:23:25.364166   23417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/test-preload-20220725162319-14919/id_rsa Username:docker}
	I0725 16:23:25.449740   23417 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:23:25.454472   23417 start.go:135] duration metric: createHost completed in 4.980053412s
	I0725 16:23:25.454487   23417 start.go:82] releasing machines lock for "test-preload-20220725162319-14919", held for 4.980192309s
	I0725 16:23:25.454549   23417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220725162319-14919
	I0725 16:23:25.522830   23417 ssh_runner.go:195] Run: systemctl --version
	I0725 16:23:25.522879   23417 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:23:25.522893   23417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725162319-14919
	I0725 16:23:25.523041   23417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725162319-14919
	I0725 16:23:25.598425   23417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/test-preload-20220725162319-14919/id_rsa Username:docker}
	I0725 16:23:25.598828   23417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/test-preload-20220725162319-14919/id_rsa Username:docker}
	I0725 16:23:25.906139   23417 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:23:25.915758   23417 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:23:25.915819   23417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:23:25.924958   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:23:25.936881   23417 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 16:23:26.004746   23417 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:23:26.073030   23417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:23:26.139277   23417 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:23:26.338657   23417 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:23:26.374650   23417 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:23:26.433672   23417 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	I0725 16:23:26.433876   23417 cli_runner.go:164] Run: docker exec -t test-preload-20220725162319-14919 dig +short host.docker.internal
	I0725 16:23:26.560393   23417 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:23:26.560625   23417 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:23:26.564845   23417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
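Annotation: the bash one-liner above keeps /etc/hosts idempotent: drop any existing line ending in the host name, then append the fresh IP mapping. The same logic in Go, as a hypothetical helper:

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any prior line for name and appends ip<TAB>name,
// mirroring the grep -v / echo / cp pattern in the log above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		panic(err)
	}
}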
	I0725 16:23:26.574363   23417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" test-preload-20220725162319-14919
	I0725 16:23:26.644622   23417 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0725 16:23:26.644712   23417 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:23:26.674440   23417 docker.go:611] Got preloaded images: 
	I0725 16:23:26.674454   23417 docker.go:617] k8s.gcr.io/kube-apiserver:v1.17.0 wasn't preloaded
	I0725 16:23:26.674459   23417 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 16:23:26.680870   23417 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 16:23:26.681709   23417 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 16:23:26.682144   23417 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 16:23:26.682277   23417 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0725 16:23:26.682576   23417 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0725 16:23:26.682753   23417 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0725 16:23:26.683512   23417 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 16:23:26.683887   23417 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0725 16:23:26.688328   23417 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 16:23:26.688423   23417 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0725 16:23:26.689467   23417 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 16:23:26.689743   23417 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0725 16:23:26.690849   23417 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 16:23:26.691094   23417 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0725 16:23:26.691285   23417 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 16:23:26.691539   23417 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0725 16:23:27.333518   23417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 16:23:27.363082   23417 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0725 16:23:27.363119   23417 docker.go:292] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 16:23:27.363169   23417 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 16:23:27.386942   23417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0725 16:23:27.395231   23417 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0725 16:23:27.395397   23417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0725 16:23:27.403436   23417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 16:23:27.420050   23417 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.0': No such file or directory
	I0725 16:23:27.420089   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I0725 16:23:27.420279   23417 cache_images.go:116] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0725 16:23:27.420305   23417 docker.go:292] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0725 16:23:27.420362   23417 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.4.3-0
	I0725 16:23:27.450760   23417 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0725 16:23:27.450786   23417 docker.go:292] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 16:23:27.450846   23417 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 16:23:27.473950   23417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0725 16:23:27.490896   23417 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0725 16:23:27.491018   23417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0725 16:23:27.528187   23417 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0725 16:23:27.528331   23417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0725 16:23:27.538906   23417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
	I0725 16:23:27.566701   23417 cache_images.go:116] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0725 16:23:27.566726   23417 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
	I0725 16:23:27.566740   23417 docker.go:292] Removing image: k8s.gcr.io/pause:3.1
	I0725 16:23:27.566758   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0725 16:23:27.566804   23417 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.1
	I0725 16:23:27.572478   23417 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.0': No such file or directory
	I0725 16:23:27.572514   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I0725 16:23:27.594411   23417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0725 16:23:27.598284   23417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 16:23:27.622158   23417 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0725 16:23:27.622192   23417 docker.go:292] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0725 16:23:27.622259   23417 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0725 16:23:27.641871   23417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 16:23:27.660671   23417 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0725 16:23:27.660840   23417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0725 16:23:27.699789   23417 cache_images.go:116] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0725 16:23:27.699827   23417 docker.go:292] Removing image: k8s.gcr.io/coredns:1.6.5
	I0725 16:23:27.699904   23417 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns:1.6.5
	I0725 16:23:27.710960   23417 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0725 16:23:27.710989   23417 docker.go:292] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 16:23:27.711041   23417 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 16:23:27.736805   23417 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0725 16:23:27.736937   23417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0725 16:23:27.752546   23417 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0725 16:23:27.752573   23417 docker.go:292] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 16:23:27.752585   23417 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0725 16:23:27.752607   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0725 16:23:27.752646   23417 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 16:23:27.818182   23417 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0725 16:23:27.818348   23417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0725 16:23:27.827291   23417 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0725 16:23:27.827333   23417 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.0': No such file or directory
	I0725 16:23:27.827361   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I0725 16:23:27.827457   23417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0725 16:23:27.868954   23417 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 16:23:27.869126   23417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0725 16:23:27.887811   23417 docker.go:259] Loading image: /var/lib/minikube/images/pause_3.1
	I0725 16:23:27.887830   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I0725 16:23:27.894191   23417 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_1.6.5: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.6.5': No such file or directory
	I0725 16:23:27.894224   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I0725 16:23:27.899253   23417 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.0': No such file or directory
	I0725 16:23:27.899315   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I0725 16:23:27.940619   23417 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0725 16:23:27.940652   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0725 16:23:28.195393   23417 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 from cache
	I0725 16:23:29.044733   23417 docker.go:259] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0725 16:23:29.044758   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0725 16:23:29.653069   23417 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0725 16:23:29.653094   23417 docker.go:259] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0725 16:23:29.653103   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I0725 16:23:30.551619   23417 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 from cache
	I0725 16:23:30.622839   23417 docker.go:259] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0725 16:23:30.622854   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I0725 16:23:32.861062   23417 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load": (2.238186832s)
	I0725 16:23:32.861076   23417 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0725 16:23:32.861109   23417 docker.go:259] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0725 16:23:32.861122   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I0725 16:23:33.901180   23417 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load": (1.040045827s)
	I0725 16:23:33.901198   23417 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0725 16:23:33.901215   23417 docker.go:259] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0725 16:23:33.901230   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I0725 16:23:35.229709   23417 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load": (1.328465298s)
	I0725 16:23:35.229724   23417 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0725 16:23:35.229745   23417 docker.go:259] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0725 16:23:35.229770   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I0725 16:23:36.321972   23417 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (1.0921706s)
	I0725 16:23:36.321986   23417 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0725 16:23:36.322042   23417 docker.go:259] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0725 16:23:36.322053   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I0725 16:23:39.378014   23417 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load": (3.055927228s)
	I0725 16:23:39.378030   23417 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0725 16:23:39.378092   23417 cache_images.go:123] Successfully loaded all cached images
	I0725 16:23:39.378096   23417 cache_images.go:92] LoadImages completed in 12.70364018s
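Annotation: the LoadImages pass above repeats one pattern per image: `docker image inspect` to see whether the tag is already present at the expected hash, `docker rmi` to clear a stale tag, scp of the cached tarball into /var/lib/minikube/images, then `sudo cat <tar> | docker load`. A condensed sketch of that per-image flow (hypothetical function; the digest comparison and the scp step are elided):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// loadIfMissing loads ref from its cached tarball unless the runtime
// already has the tag. Real code also compares the image ID against the
// expected hash before deciding the image can be skipped.
func loadIfMissing(ref, tarball string) error {
	// `docker image inspect` exits non-zero when the image is absent.
	if err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Run(); err == nil {
		return nil
	}
	exec.Command("docker", "rmi", ref).Run() // best effort, ignore errors
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", tarball)).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	for _, ref := range []string{"k8s.gcr.io/pause:3.1", "k8s.gcr.io/etcd:3.4.3-0"} {
		// pause:3.1 -> /var/lib/minikube/images/pause_3.1, as in the log.
		name := strings.ReplaceAll(ref[strings.LastIndex(ref, "/")+1:], ":", "_")
		if err := loadIfMissing(ref, "/var/lib/minikube/images/"+name); err != nil {
			panic(err)
		}
	}
}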
	I0725 16:23:39.378267   23417 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 16:23:39.452868   23417 cni.go:95] Creating CNI manager for ""
	I0725 16:23:39.452880   23417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:23:39.452895   23417 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:23:39.452905   23417 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20220725162319-14919 NodeName:test-preload-20220725162319-14919 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:23:39.453014   23417 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "test-preload-20220725162319-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 16:23:39.453089   23417 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=test-preload-20220725162319-14919 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220725162319-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
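Annotation: the kubeadm YAML and the kubelet ExecStart above are rendered from the cluster config printed in these lines; they read like the output of Go text/template rendering. A minimal illustration of that idea (the template text and field names here are made up for the sketch, not minikube's actual templates):

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --container-runtime={{.ContainerRuntime}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.17.0",
		"ContainerRuntime":  "docker",
		"NodeName":          "test-preload-20220725162319-14919",
		"NodeIP":            "192.168.67.2",
	})
	if err != nil {
		panic(err)
	}
}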
	I0725 16:23:39.453150   23417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0725 16:23:39.460969   23417 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.0': No such file or directory
	
	Initiating transfer...
	I0725 16:23:39.461023   23417 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0725 16:23:39.468864   23417 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/linux/amd64/v1.17.0/kubeadm
	I0725 16:23:39.468897   23417 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/linux/amd64/v1.17.0/kubectl
	I0725 16:23:39.468903   23417 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/linux/amd64/v1.17.0/kubelet
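Annotation: each download URL above carries a `?checksum=file:<url>.sha256` hint, meaning the binary is verified against a published sha256 before being cached. A self-contained sketch of that verify-after-download step (illustrative only, not minikube's download package):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL fully into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	// The .sha256 file holds the hex digest as its first field.
	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: %s != %s", got, want))
	}
	fmt.Println("kubeadm checksum OK")
}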
	I0725 16:23:40.327145   23417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0725 16:23:40.332319   23417 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubeadm': No such file or directory
	I0725 16:23:40.332361   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0725 16:23:40.334980   23417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0725 16:23:40.383272   23417 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0725 16:23:40.383303   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0725 16:23:40.868363   23417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:23:40.933297   23417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0725 16:23:41.000050   23417 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubelet': No such file or directory
	I0725 16:23:41.000090   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I0725 16:23:43.477899   23417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:23:43.485101   23417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0725 16:23:43.498079   23417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 16:23:43.511232   23417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0725 16:23:43.524654   23417 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:23:43.528231   23417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:23:43.538399   23417 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919 for IP: 192.168.67.2
	I0725 16:23:43.538530   23417 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:23:43.538578   23417 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:23:43.538621   23417 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/client.key
	I0725 16:23:43.538634   23417 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/client.crt with IP's: []
	I0725 16:23:43.732042   23417 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/client.crt ...
	I0725 16:23:43.732056   23417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/client.crt: {Name:mk7aeefe578eb499acea2da84a1fd3e8404e7659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:23:43.732375   23417 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/client.key ...
	I0725 16:23:43.732385   23417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/client.key: {Name:mk98482500b66c9413d04b81df0d00e13e1c748d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:23:43.732607   23417 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.key.c7fa3a9e
	I0725 16:23:43.732624   23417 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0725 16:23:43.811033   23417 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.crt.c7fa3a9e ...
	I0725 16:23:43.811052   23417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.crt.c7fa3a9e: {Name:mk19ee3923d85e1701a09ff054ffee9eb93c2a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:23:43.811366   23417 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.key.c7fa3a9e ...
	I0725 16:23:43.811376   23417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.key.c7fa3a9e: {Name:mk7b0947877b9bec2732b1d370f5c4992b474447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:23:43.811583   23417 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.crt
	I0725 16:23:43.811759   23417 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.key
	I0725 16:23:43.811929   23417 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/proxy-client.key
	I0725 16:23:43.811945   23417 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/proxy-client.crt with IP's: []
	I0725 16:23:43.948977   23417 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/proxy-client.crt ...
	I0725 16:23:43.948986   23417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/proxy-client.crt: {Name:mk044930e86be3e69003a8ee99c14d4e8ae77b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:23:43.949213   23417 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/proxy-client.key ...
	I0725 16:23:43.949224   23417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/proxy-client.key: {Name:mkb93168494218e73a5168229b2ad8a3c2211440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:23:43.949590   23417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:23:43.949628   23417 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:23:43.949656   23417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:23:43.949689   23417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:23:43.949721   23417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:23:43.949750   23417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:23:43.949813   23417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:23:43.950302   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:23:43.978702   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 16:23:43.996231   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:23:44.013477   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/test-preload-20220725162319-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 16:23:44.030771   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:23:44.048029   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:23:44.065252   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:23:44.082416   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:23:44.099554   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:23:44.116664   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:23:44.133678   23417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:23:44.151866   23417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
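The runner has now pushed every generated certificate to /var/lib/minikube/certs on the node and written the kubeconfig straight from memory. A minimal sketch for spot-checking the pushed material, assuming shell access to the node via minikube ssh:

    minikube ssh -p test-preload-20220725162319-14919
    # inside the node: the apiserver cert should chain to the pushed cluster CA
    sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt
    sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt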
	I0725 16:23:44.164749   23417 ssh_runner.go:195] Run: openssl version
	I0725 16:23:44.170351   23417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:23:44.177899   23417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:23:44.181947   23417 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:23:44.181988   23417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:23:44.187260   23417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
	I0725 16:23:44.199044   23417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:23:44.207626   23417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:23:44.211698   23417 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:23:44.211740   23417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:23:44.217034   23417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 16:23:44.225000   23417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:23:44.233336   23417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:23:44.237154   23417 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:23:44.237194   23417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:23:44.242205   23417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
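The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: openssl x509 -hash prints the subject-name hash, and OpenSSL resolves trust anchors in /etc/ssl/certs by looking up <hash>.0. A minimal sketch of the same steps for one certificate:

    # the hash printed here is what names the symlink, e.g. b5213941
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"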
	I0725 16:23:44.249754   23417 kubeadm.go:395] StartCluster: {Name:test-preload-20220725162319-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220725162319-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
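The StartCluster blob above is the persisted profile configuration dumped inline. Outside a test run it is easier to read from disk; a sketch assuming the default .minikube profile layout (jq only for readability):

    jq .KubernetesConfig "$HOME/.minikube/profiles/test-preload-20220725162319-14919/config.json"
    # or via the CLI:
    minikube profile list -o json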
	I0725 16:23:44.249842   23417 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:23:44.279805   23417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:23:44.288845   23417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:23:44.295977   23417 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:23:44.296027   23417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:23:44.304562   23417 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:23:44.304588   23417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:23:44.987493   23417 out.go:204]   - Generating certificates and keys ...
	I0725 16:23:47.210008   23417 out.go:204]   - Booting up control plane ...
	W0725 16:25:42.123961   23417 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220725162319-14919 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220725162319-14919 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0725 23:23:44.353076    1578 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0725 23:23:44.353130    1578 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 23:23:47.194363    1578 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 23:23:47.195216    1578 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
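This is kubeadm's generic wait-control-plane timeout: the kubelet never answers its local health endpoint, so no static pods start and the apiserver never comes up. The log's own troubleshooting advice, collected into one runnable sequence (inside the node via minikube ssh; CONTAINERID is a placeholder):

    systemctl status kubelet
    journalctl -xeu kubelet
    curl -sSL http://localhost:10248/healthz          # the probe kubeadm keeps retrying
    docker ps -a | grep kube | grep -v pause          # look for crashed control-plane containers
    docker logs CONTAINERID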
	I0725 16:25:42.123999   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:25:42.548627   23417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:25:42.558641   23417 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:25:42.558696   23417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:25:42.566493   23417 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:25:42.566511   23417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:25:43.271358   23417 out.go:204]   - Generating certificates and keys ...
	I0725 16:25:43.873230   23417 out.go:204]   - Booting up control plane ...
	I0725 16:27:38.874723   23417 kubeadm.go:397] StartCluster complete in 3m54.547866436s
	I0725 16:27:38.874818   23417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:27:38.903078   23417 logs.go:274] 0 containers: []
	W0725 16:27:38.903092   23417 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:27:38.903168   23417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:27:38.931249   23417 logs.go:274] 0 containers: []
	W0725 16:27:38.931261   23417 logs.go:276] No container was found matching "etcd"
	I0725 16:27:38.931320   23417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:27:38.959916   23417 logs.go:274] 0 containers: []
	W0725 16:27:38.959930   23417 logs.go:276] No container was found matching "coredns"
	I0725 16:27:38.959988   23417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:27:38.988431   23417 logs.go:274] 0 containers: []
	W0725 16:27:38.988443   23417 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:27:38.988502   23417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:27:39.017480   23417 logs.go:274] 0 containers: []
	W0725 16:27:39.017493   23417 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:27:39.017557   23417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:27:39.045913   23417 logs.go:274] 0 containers: []
	W0725 16:27:39.045926   23417 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:27:39.045990   23417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:27:39.075231   23417 logs.go:274] 0 containers: []
	W0725 16:27:39.075243   23417 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:27:39.075300   23417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:27:39.104929   23417 logs.go:274] 0 containers: []
	W0725 16:27:39.104943   23417 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:27:39.104949   23417 logs.go:123] Gathering logs for kubelet ...
	I0725 16:27:39.104956   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:27:39.144485   23417 logs.go:123] Gathering logs for dmesg ...
	I0725 16:27:39.144498   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:27:39.157589   23417 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:27:39.157600   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:27:39.208666   23417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
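The connection refused on localhost:8443 is expected at this point: the docker ps filters above found no kube-apiserver container, so nothing is listening on the apiserver port. A one-line check, assuming curl is present inside the node:

    curl -sk https://localhost:8443/healthz || echo 'no apiserver listening on 8443'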
	I0725 16:27:39.208677   23417 logs.go:123] Gathering logs for Docker ...
	I0725 16:27:39.208683   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:27:39.223426   23417 logs.go:123] Gathering logs for container status ...
	I0725 16:27:39.223439   23417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:27:41.280935   23417 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05746409s)
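The container-status command uses a small shell fallback idiom: the command substitution resolves crictl's full path when it is installed and the bare word crictl otherwise, and if that invocation fails for any reason the outer || drops through to plain docker ps -a. Spelled out:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # a more explicit near-equivalent: prefer crictl when present, else docker
    if command -v crictl >/dev/null 2>&1; then sudo crictl ps -a; else sudo docker ps -a; fi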
	W0725 16:27:41.281050   23417 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0725 23:25:42.613155    3867 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0725 23:25:42.613206    3867 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 23:25:43.859300    3867 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 23:25:43.860632    3867 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 16:27:41.281065   23417 out.go:239] * 
	W0725 16:27:41.281196   23417 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0725 23:25:42.613155    3867 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0725 23:25:42.613206    3867 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 23:25:43.859300    3867 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 23:25:43.860632    3867 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 16:27:41.281212   23417 out.go:239] * 
	W0725 16:27:41.281748   23417 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 16:27:41.344743   23417 out.go:177] 
	W0725 16:27:41.388741   23417 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0725 23:25:42.613155    3867 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0725 23:25:42.613206    3867 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 23:25:43.859300    3867 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 23:25:43.860632    3867 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 16:27:41.388830   23417 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 16:27:41.388877   23417 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 16:27:41.431482   23417 out.go:177] 

** /stderr **
preload_test.go:50: out/minikube-darwin-amd64 start -p test-preload-20220725162319-14919 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 109
panic.go:482: *** TestPreload FAILED at 2022-07-25 16:27:41.554644 -0700 PDT m=+2116.526823284
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220725162319-14919
helpers_test.go:235: (dbg) docker inspect test-preload-20220725162319-14919:

-- stdout --
	[
	    {
	        "Id": "8a678341c48a9a04e70fa41b984d770a878e040701f82a1f266598bc8644b509",
	        "Created": "2022-07-25T23:23:21.734198887Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 106990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:23:22.052166903Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/8a678341c48a9a04e70fa41b984d770a878e040701f82a1f266598bc8644b509/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a678341c48a9a04e70fa41b984d770a878e040701f82a1f266598bc8644b509/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a678341c48a9a04e70fa41b984d770a878e040701f82a1f266598bc8644b509/hosts",
	        "LogPath": "/var/lib/docker/containers/8a678341c48a9a04e70fa41b984d770a878e040701f82a1f266598bc8644b509/8a678341c48a9a04e70fa41b984d770a878e040701f82a1f266598bc8644b509-json.log",
	        "Name": "/test-preload-20220725162319-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20220725162319-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20220725162319-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/34c06a49fc6ed7b4f354f28bc390d457008cd3288e92492536a213f8b107dbaa-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c700ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/docker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a700e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/34c06a49fc6ed7b4f354f28bc390d457008cd3288e92492536a213f8b107dbaa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/34c06a49fc6ed7b4f354f28bc390d457008cd3288e92492536a213f8b107dbaa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/34c06a49fc6ed7b4f354f28bc390d457008cd3288e92492536a213f8b107dbaa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20220725162319-14919",
	                "Source": "/var/lib/docker/volumes/test-preload-20220725162319-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20220725162319-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20220725162319-14919",
	                "name.minikube.sigs.k8s.io": "test-preload-20220725162319-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4784134498bdbfd0e1ad3d5364f74963c6eb10cf7effa96ecc7a95b3041307bb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60979"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60980"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60976"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60977"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60978"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4784134498bd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20220725162319-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8a678341c48a",
	                        "test-preload-20220725162319-14919"
	                    ],
	                    "NetworkID": "0f02b4265c52f8f267f63abe640d30aa210f8d51a7fa42dab9c4fcc2e1d6b3ef",
	                    "EndpointID": "51ea939d88d1f587992d9c5775b255cfc2273dc95bcab32c0031b76897ab190a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
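
Side note: the inspect dump above shows the kic container itself is fine (State.Status "running", ExitCode 0); only the control plane inside it failed to come up. The same fields can be spot-checked without the full JSON, e.g. with this convenience sketch (not a command the test harness runs):

	# spot-check container health via a Go template instead of the full JSON dump
	docker inspect -f '{{.State.Status}} {{.RestartCount}} {{.Config.Image}}' \
	  test-preload-20220725162319-14919
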
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220725162319-14919 -n test-preload-20220725162319-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220725162319-14919 -n test-preload-20220725162319-14919: exit status 6 (427.598563ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 16:27:42.045492   23827 status.go:413] kubeconfig endpoint: extract IP: "test-preload-20220725162319-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-20220725162319-14919" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-20220725162319-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220725162319-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220725162319-14919: (2.532742851s)
--- FAIL: TestPreload (264.95s)
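
Note on this failure: the run's own suggestion (the out.go:239 lines above, and minikube issue #4172) is to retry with the kubelet pinned to the systemd cgroup driver. A manual retry along those lines, reusing the failed invocation's core flags plus the suggested override, would be the following untested sketch:

	# delete the broken profile, then retry with the suggested kubelet override
	out/minikube-darwin-amd64 delete -p test-preload-20220725162319-14919
	out/minikube-darwin-amd64 start -p test-preload-20220725162319-14919 \
	  --memory=2200 --driver=docker --kubernetes-version=v1.17.0 \
	  --extra-config=kubelet.cgroup-driver=systemd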

TestRunningBinaryUpgrade (69.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3852813294.exe start -p running-upgrade-20220725163251-14919 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3852813294.exe start -p running-upgrade-20220725163251-14919 --memory=2200 --vm-driver=docker : exit status 70 (53.682594816s)

-- stdout --
	! [running-upgrade-20220725163251-14919] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1887243277
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:33:26.442107614 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-20220725163251-14919" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:33:43.410728173 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-20220725163251-14919", then "minikube start -p running-upgrade-20220725163251-14919 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* minikube 1.26.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.26.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:33:43.410728173 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3852813294.exe start -p running-upgrade-20220725163251-14919 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3852813294.exe start -p running-upgrade-20220725163251-14919 --memory=2200 --vm-driver=docker : exit status 70 (4.575849099s)

-- stdout --
	* [running-upgrade-20220725163251-14919] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1060655788
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220725163251-14919" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3852813294.exe start -p running-upgrade-20220725163251-14919 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3852813294.exe start -p running-upgrade-20220725163251-14919 --memory=2200 --vm-driver=docker : exit status 70 (4.662001838s)

-- stdout --
	* [running-upgrade-20220725163251-14919] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1667175935
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220725163251-14919" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-07-25 16:33:57.19308 -0700 PDT m=+2492.161810729
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220725163251-14919
helpers_test.go:235: (dbg) docker inspect running-upgrade-20220725163251-14919:

-- stdout --
	[
	    {
	        "Id": "4a0064a36829da9be2b139229b276fa46c62ac3f9df95a3520a8be41624c2112",
	        "Created": "2022-07-25T23:33:34.681854661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 142060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:33:34.923286883Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/4a0064a36829da9be2b139229b276fa46c62ac3f9df95a3520a8be41624c2112/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4a0064a36829da9be2b139229b276fa46c62ac3f9df95a3520a8be41624c2112/hostname",
	        "HostsPath": "/var/lib/docker/containers/4a0064a36829da9be2b139229b276fa46c62ac3f9df95a3520a8be41624c2112/hosts",
	        "LogPath": "/var/lib/docker/containers/4a0064a36829da9be2b139229b276fa46c62ac3f9df95a3520a8be41624c2112/4a0064a36829da9be2b139229b276fa46c62ac3f9df95a3520a8be41624c2112-json.log",
	        "Name": "/running-upgrade-20220725163251-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220725163251-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d1bf59f93cbb7117b452316a83aedad754e3af0d3c56bdf613a220e6831bf30a-init/diff:/var/lib/docker/overlay2/974d823892b89bae092eccadf060a8c6aed1f7fb0a6093743c321409323058b9/diff:/var/lib/docker/overlay2/ced73de6c3f1fdf98deb4630ebf2474d8f74baa5cd26fdb3d9decef060ae6f74/diff:/var/lib/docker/overlay2/c8f60c36f08254a27408c8f766a1326e9886fbd11aaa7587071af2858637f918/diff:/var/lib/docker/overlay2/3018fdda1859c2de0fd5f338b142de6d798ea38ea06617ed746551538735d335/diff:/var/lib/docker/overlay2/9946a21a7825b5cc6c2e9de80a91755fb86e38729b7a62630141715bf109ade3/diff:/var/lib/docker/overlay2/aadbee40fb42ec5693023d561580ab07ee91c1ff8fad55cd0b79c16ce3adf4f7/diff:/var/lib/docker/overlay2/9f90f677f177db8b6a6587f4e54932b32d53c84882f0548ebc1aabe213cf7d25/diff:/var/lib/docker/overlay2/5986a5e59db7cab26b1709feb2e5f832a621bb1907628146cdb24b4c29fbc5c4/diff:/var/lib/docker/overlay2/430cc152ab6e35ab72dd5ec1e43b1880a9e5a6804d878696333ca9ef2ae18114/diff:/var/lib/docker/overlay2/7bf3e907040cf03ff17daa64cad8b0825603e78921b6f5f9e981b8cdf71a65c4/diff:/var/lib/docker/overlay2/c66506223dac7f0cd80d3730bcdd87c1acf681ac8c34154d5b998177a17d2905/diff:/var/lib/docker/overlay2/a8ce9f864f358efb38080d249efdc38e27f7e5f080364f951a2cba55eba02bc4/diff:/var/lib/docker/overlay2/c86adef54e98a8919440d996890121f850adbc8815e87833ee6aae81a8620ca6/diff:/var/lib/docker/overlay2/8f67672e6507f0dd5cb0f415542f261d340a8a6784d327bc92210628f964503a/diff:/var/lib/docker/overlay2/6ce94ba6472679bd3bcd9c8564cd354ec35b5ccc2c7dbdd2a3d9336cdf43e6a4/diff:/var/lib/docker/overlay2/87b56923b36d8d20bb4154d81f9f8e7cb3d8aeaef5a496351341cc2320d706f3/diff:/var/lib/docker/overlay2/aacb33a6c5a16310153c98cb29a9c43978a237ddb7f33a91e3077c999185a519/diff:/var/lib/docker/overlay2/9200066cea73e4a5113439bfa175043a8b14d43b8ef508830693d9c56acabf08/diff:/var/lib/docker/overlay2/94d96ed7ad2ad6af98e5bd2e03d9f8c7f588ee9c13972ffb85190455f2a9c179/diff:/var/lib/docker/overlay2/050dff19d196127eaa7380bbf6e957d58b901e0e8713b88c51eed27d905cb323/diff:/var/lib/docker/overlay2/d9c7b17075d136dd7e1bb1f6f2f1a6da63d216b07f790834109cc7fcedd1658d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1bf59f93cbb7117b452316a83aedad754e3af0d3c56bdf613a220e6831bf30a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1bf59f93cbb7117b452316a83aedad754e3af0d3c56bdf613a220e6831bf30a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1bf59f93cbb7117b452316a83aedad754e3af0d3c56bdf613a220e6831bf30a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220725163251-14919",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220725163251-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220725163251-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220725163251-14919",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220725163251-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c72c9576febb4dd255ba80320519da691570e4d573c3008639447f72154c818c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62837"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62838"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62839"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c72c9576febb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "cf86949be1e97e4bdbda32042f9b2a53a6b3ae08373d5a621d63225de98e2ec5",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "9c1877d79117e9d87cbef2330fcadbcdf13c1df54be8168e15d84918d970d7bf",
	                    "EndpointID": "cf86949be1e97e4bdbda32042f9b2a53a6b3ae08373d5a621d63225de98e2ec5",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220725163251-14919 -n running-upgrade-20220725163251-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220725163251-14919 -n running-upgrade-20220725163251-14919: exit status 6 (438.368932ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 16:33:57.695371   26009 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20220725163251-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-20220725163251-14919" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-20220725163251-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220725163251-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220725163251-14919: (2.471597418s)
--- FAIL: TestRunningBinaryUpgrade (69.13s)
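
Note on this failure: both legacy start attempts die the same way; the v1.9.0 provisioner rewrites /lib/systemd/system/docker.service and dockerd never comes back. The diff above even shows the rewritten unit ending in "ExecReload=/bin/kill -s HUP " with the $MAINPID argument dropped, though this log alone does not prove that is the fatal line. Following the log's own pointer to systemctl/journalctl, a hedged triage sketch (assuming the container from the failed run is still up):

	# read the unit the provisioner actually wrote, then the dockerd failure itself
	docker exec running-upgrade-20220725163251-14919 cat /lib/systemd/system/docker.service
	docker exec running-upgrade-20220725163251-14919 journalctl -u docker.service --no-pager | tail -n 40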

TestKubernetesUpgrade (322.34s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725163448-14919 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0725 16:35:18.923355   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:18.929816   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:18.940253   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:18.960497   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:19.001982   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:19.084262   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:19.245202   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:19.565687   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:20.205832   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:21.486048   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:24.046613   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:29.167220   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:35:39.408030   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
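The thirteen cert_rotation errors above are the certificate-rotation watcher repeatedly re-reading a client.crt that no longer exists (it belongs to the earlier skaffold profile, whose directory has since been cleaned up). The gaps between retries roughly double each time, from about 6 ms up to roughly 10 s, the usual exponential-backoff cadence. A minimal, self-contained Go sketch of that retry shape, with a hypothetical missing-file read standing in for the key load:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Hypothetical stand-in for the failing client.crt read in the log.
		read := func() error {
			_, err := os.ReadFile("/tmp/missing-client.crt")
			return err
		}
		delay := 5 * time.Millisecond
		for i := 0; i < 12; i++ {
			if err := read(); err != nil {
				fmt.Printf("retry %d after %v: %v\n", i, delay, err)
			}
			time.Sleep(delay)
			if delay < 10*time.Second { // cap near the ~10s gap seen above
				delay *= 2
			}
		}
	}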

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725163448-14919 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m14.138785278s)

-- stdout --
	* [kubernetes-upgrade-20220725163448-14919] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-20220725163448-14919 in cluster kubernetes-upgrade-20220725163448-14919
	* Pulling base image ...
	* Downloading Kubernetes v1.16.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0725 16:34:48.463499   26369 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:34:48.463663   26369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:34:48.463669   26369 out.go:309] Setting ErrFile to fd 2...
	I0725 16:34:48.463675   26369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:34:48.463795   26369 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:34:48.464315   26369 out.go:303] Setting JSON to false
	I0725 16:34:48.479600   26369 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9011,"bootTime":1658783077,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:34:48.479706   26369 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:34:48.501716   26369 out.go:177] * [kubernetes-upgrade-20220725163448-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:34:48.544890   26369 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:34:48.544842   26369 notify.go:193] Checking for updates...
	I0725 16:34:48.566850   26369 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:34:48.588738   26369 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:34:48.610769   26369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:34:48.632726   26369 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:34:48.654562   26369 config.go:178] Loaded profile config "cert-expiration-20220725163211-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:34:48.654659   26369 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:34:48.723714   26369 docker.go:137] docker version: linux-20.10.17
	I0725 16:34:48.723864   26369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:34:48.855423   26369 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:34:48.788932183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:34:48.877186   26369 out.go:177] * Using the docker driver based on user configuration
	I0725 16:34:48.898884   26369 start.go:284] selected driver: docker
	I0725 16:34:48.898930   26369 start.go:808] validating driver "docker" against <nil>
	I0725 16:34:48.898951   26369 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:34:48.901138   26369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:34:49.034057   26369 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:34:48.96768364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:34:49.034167   26369 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0725 16:34:49.034327   26369 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 16:34:49.056183   26369 out.go:177] * Using Docker Desktop driver with root privileges
	I0725 16:34:49.077930   26369 cni.go:95] Creating CNI manager for ""
	I0725 16:34:49.077964   26369 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:34:49.077978   26369 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220725163448-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220725163448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:34:49.099945   26369 out.go:177] * Starting control plane node kubernetes-upgrade-20220725163448-14919 in cluster kubernetes-upgrade-20220725163448-14919
	I0725 16:34:49.141588   26369 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:34:49.162755   26369 out.go:177] * Pulling base image ...
	I0725 16:34:49.183770   26369 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:34:49.183818   26369 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:34:49.248892   26369 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:34:49.248917   26369 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:34:49.269719   26369 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 16:34:49.269756   26369 cache.go:57] Caching tarball of preloaded images
	I0725 16:34:49.270136   26369 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:34:49.313828   26369 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0725 16:34:49.334706   26369 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0725 16:34:49.449225   26369 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 16:34:52.631539   26369 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0725 16:34:52.631690   26369 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0725 16:34:53.180401   26369 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on docker
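The preload download above carries its expected digest in the URL (?checksum=md5:326f3ce331abb64565b50b8c9e791244), and the saved tarball is then re-verified on disk before it is cached. A minimal sketch of that local verification step, using a hypothetical path and digest rather than minikube's actual helper:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 recomputes the md5 of the file at path and compares it to wantHex.
	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// Illustrative values only, not the real cache path.
		fmt.Println(verifyMD5("preloaded-images.tar.lz4", "326f3ce331abb64565b50b8c9e791244"))
	}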
	I0725 16:34:53.180488   26369 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/config.json ...
	I0725 16:34:53.180511   26369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/config.json: {Name:mkdc93ea94d7e054956105b18282844549c18261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:34:53.180785   26369 cache.go:208] Successfully downloaded all kic artifacts
	I0725 16:34:53.180817   26369 start.go:370] acquiring machines lock for kubernetes-upgrade-20220725163448-14919: {Name:mk334774c1af85cfaf9247ebfdb50be9350cdeb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:34:53.180907   26369 start.go:374] acquired machines lock for "kubernetes-upgrade-20220725163448-14919" in 81.618µs
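Machine creation is serialized behind a named lock; the line above shows its parameters (Delay:500ms Timeout:10m0s), and acquisition takes only 81µs here because nothing contends for it. A simplified sketch of that acquire/poll/timeout shape using a plain O_EXCL lock file (the real implementation is a library lock, and the path below is made up):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire polls for an exclusive lock file, mirroring the 500ms delay
	// and overall timeout reported in the log.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held; machine creation would run here")
	}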
	I0725 16:34:53.180930   26369 start.go:92] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220725163448-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220725163448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 16:34:53.180976   26369 start.go:132] createHost starting for "" (driver="docker")
	I0725 16:34:53.227030   26369 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0725 16:34:53.227263   26369 start.go:166] libmachine.API.Create for "kubernetes-upgrade-20220725163448-14919" (driver="docker")
	I0725 16:34:53.227286   26369 client.go:168] LocalClient.Create starting
	I0725 16:34:53.227372   26369 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem
	I0725 16:34:53.227406   26369 main.go:134] libmachine: Decoding PEM data...
	I0725 16:34:53.227419   26369 main.go:134] libmachine: Parsing certificate...
	I0725 16:34:53.227465   26369 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem
	I0725 16:34:53.227497   26369 main.go:134] libmachine: Decoding PEM data...
	I0725 16:34:53.227507   26369 main.go:134] libmachine: Parsing certificate...
	I0725 16:34:53.227897   26369 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220725163448-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 16:34:53.292825   26369 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220725163448-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 16:34:53.292922   26369 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220725163448-14919] to gather additional debugging logs...
	I0725 16:34:53.292942   26369 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220725163448-14919
	W0725 16:34:53.357322   26369 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220725163448-14919 returned with exit code 1
	I0725 16:34:53.357365   26369 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220725163448-14919]: docker network inspect kubernetes-upgrade-20220725163448-14919: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220725163448-14919
	I0725 16:34:53.357401   26369 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220725163448-14919]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220725163448-14919
	
	** /stderr **
	I0725 16:34:53.357485   26369 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 16:34:53.422844   26369 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000648200] misses:0}
	I0725 16:34:53.422884   26369 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:34:53.422901   26369 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220725163448-14919 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0725 16:34:53.422972   26369 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725163448-14919 kubernetes-upgrade-20220725163448-14919
	W0725 16:34:53.493500   26369 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725163448-14919 kubernetes-upgrade-20220725163448-14919 returned with exit code 1
	W0725 16:34:53.493536   26369 network_create.go:107] failed to create docker network kubernetes-upgrade-20220725163448-14919 192.168.49.0/24, will retry: subnet is taken
	I0725 16:34:53.493800   26369 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000648200] amended:false}} dirty:map[] misses:0}
	I0725 16:34:53.493816   26369 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:34:53.494068   26369 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000648200] amended:true}} dirty:map[192.168.49.0:0xc000648200 192.168.58.0:0xc0003e4098] misses:0}
	I0725 16:34:53.494085   26369 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:34:53.494098   26369 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220725163448-14919 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0725 16:34:53.494163   26369 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725163448-14919 kubernetes-upgrade-20220725163448-14919
	W0725 16:34:53.557359   26369 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725163448-14919 kubernetes-upgrade-20220725163448-14919 returned with exit code 1
	W0725 16:34:53.557409   26369 network_create.go:107] failed to create docker network kubernetes-upgrade-20220725163448-14919 192.168.58.0/24, will retry: subnet is taken
	I0725 16:34:53.557699   26369 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000648200] amended:true}} dirty:map[192.168.49.0:0xc000648200 192.168.58.0:0xc0003e4098] misses:1}
	I0725 16:34:53.557716   26369 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:34:53.557937   26369 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000648200] amended:true}} dirty:map[192.168.49.0:0xc000648200 192.168.58.0:0xc0003e4098 192.168.67.0:0xc000648238] misses:1}
	I0725 16:34:53.557953   26369 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:34:53.557963   26369 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220725163448-14919 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0725 16:34:53.558030   26369 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725163448-14919 kubernetes-upgrade-20220725163448-14919
	W0725 16:34:53.621803   26369 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725163448-14919 kubernetes-upgrade-20220725163448-14919 returned with exit code 1
	W0725 16:34:53.621846   26369 network_create.go:107] failed to create docker network kubernetes-upgrade-20220725163448-14919 192.168.67.0/24, will retry: subnet is taken
	I0725 16:34:53.622127   26369 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000648200] amended:true}} dirty:map[192.168.49.0:0xc000648200 192.168.58.0:0xc0003e4098 192.168.67.0:0xc000648238] misses:2}
	I0725 16:34:53.622145   26369 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:34:53.622340   26369 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000648200] amended:true}} dirty:map[192.168.49.0:0xc000648200 192.168.58.0:0xc0003e4098 192.168.67.0:0xc000648238 192.168.76.0:0xc0003e40d8] misses:2}
	I0725 16:34:53.622351   26369 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:34:53.622358   26369 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220725163448-14919 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0725 16:34:53.622428   26369 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725163448-14919 kubernetes-upgrade-20220725163448-14919
	I0725 16:34:53.728789   26369 network_create.go:99] docker network kubernetes-upgrade-20220725163448-14919 192.168.76.0/24 created
	I0725 16:34:53.728848   26369 kic.go:106] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-20220725163448-14919" container
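The three "subnet is taken" retries above trace the free-subnet search: candidate /24 networks step through the third octet (49, 58, 67, 76, so +9 per attempt) until docker network create succeeds, and the node is then assigned the .2 host address of the winning subnet (192.168.76.2 here). A compressed sketch of that loop, shelling out to the docker CLI with a hypothetical network name:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name := "example-net" // hypothetical; the log uses the profile name
		for third := 49; third <= 76; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			gateway := fmt.Sprintf("192.168.%d.1", third)
			err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).Run()
			if err != nil {
				fmt.Printf("subnet %s taken, retrying: %v\n", subnet, err)
				continue
			}
			fmt.Printf("created %s on %s; node gets 192.168.%d.2\n", name, subnet, third)
			return
		}
		fmt.Println("no free subnet found")
	}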
	I0725 16:34:53.728948   26369 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 16:34:53.797149   26369 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220725163448-14919 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725163448-14919 --label created_by.minikube.sigs.k8s.io=true
	I0725 16:34:53.861896   26369 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220725163448-14919
	I0725 16:34:53.862054   26369 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220725163448-14919-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725163448-14919 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220725163448-14919:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0725 16:34:54.316728   26369 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220725163448-14919
	I0725 16:34:54.316873   26369 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:34:54.316901   26369 kic.go:179] Starting extracting preloaded images to volume ...
	I0725 16:34:54.317005   26369 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220725163448-14919:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0725 16:34:58.200314   26369 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220725163448-14919:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (3.883081592s)
	I0725 16:34:58.200339   26369 kic.go:188] duration metric: took 3.883413 seconds to extract preloaded images to volume
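The extraction step just completed lands the preloaded images in the machine's volume without unpacking anything on the host: the tarball is bind-mounted read-only at /preloaded.tar, the named volume at /extractDir, and tar -I lz4 inside a throwaway container unpacks one into the other. Roughly, from Go, with placeholder paths and image tag instead of the real ones logged above:

	package main

	import "os/exec"

	func main() {
		// Placeholder tarball path, volume name, and image tag; the real
		// invocation is the full docker run command logged above.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/path/to/preload.tar.lz4:/preloaded.tar:ro",
			"-v", "example-volume:/extractDir",
			"kicbase:example",
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}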
	I0725 16:34:58.200460   26369 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 16:34:58.336931   26369 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220725163448-14919 --name kubernetes-upgrade-20220725163448-14919 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725163448-14919 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220725163448-14919 --network kubernetes-upgrade-20220725163448-14919 --ip 192.168.76.2 --volume kubernetes-upgrade-20220725163448-14919:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0725 16:34:58.743852   26369 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725163448-14919 --format={{.State.Running}}
	I0725 16:34:58.819079   26369 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725163448-14919 --format={{.State.Status}}
	I0725 16:34:58.897353   26369 cli_runner.go:164] Run: docker exec kubernetes-upgrade-20220725163448-14919 stat /var/lib/dpkg/alternatives/iptables
	I0725 16:34:59.030869   26369 oci.go:144] the created container "kubernetes-upgrade-20220725163448-14919" has a running status.
	I0725 16:34:59.030903   26369 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa...
	I0725 16:34:59.087625   26369 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0725 16:34:59.213399   26369 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725163448-14919 --format={{.State.Status}}
	I0725 16:34:59.289698   26369 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 16:34:59.289717   26369 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220725163448-14919 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0725 16:34:59.428644   26369 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725163448-14919 --format={{.State.Status}}
	I0725 16:34:59.501456   26369 machine.go:88] provisioning docker machine ...
	I0725 16:34:59.503682   26369 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220725163448-14919"
	I0725 16:34:59.503794   26369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:34:59.576564   26369 main.go:134] libmachine: Using SSH client type: native
	I0725 16:34:59.576777   26369 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63329 <nil> <nil>}
	I0725 16:34:59.576791   26369 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220725163448-14919 && echo "kubernetes-upgrade-20220725163448-14919" | sudo tee /etc/hostname
	I0725 16:34:59.711666   26369 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220725163448-14919
	
	I0725 16:34:59.711749   26369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:34:59.783227   26369 main.go:134] libmachine: Using SSH client type: native
	I0725 16:34:59.783385   26369 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63329 <nil> <nil>}
	I0725 16:34:59.783418   26369 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220725163448-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220725163448-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220725163448-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 16:34:59.906925   26369 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:34:59.906946   26369 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:34:59.906968   26369 ubuntu.go:177] setting up certificates
	I0725 16:34:59.906979   26369 provision.go:83] configureAuth start
	I0725 16:34:59.907062   26369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725163448-14919
	I0725 16:34:59.979531   26369 provision.go:138] copyHostCerts
	I0725 16:34:59.981714   26369 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:34:59.981723   26369 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:34:59.981828   26369 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:34:59.982030   26369 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:34:59.982038   26369 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:34:59.982104   26369 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:34:59.982241   26369 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:34:59.982247   26369 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:34:59.982303   26369 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:34:59.982411   26369 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220725163448-14919 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220725163448-14919]
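configureAuth then mints a server certificate whose SAN list is exactly what the log line above shows: the container's static IP, loopback, and the hostname aliases, valid for the CertExpiration of 26280h from the config dump. A self-signed approximation using only the standard library (the real server.pem is signed by the minikube CA, not by itself):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-20220725163448-14919"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-20220725163448-14919"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}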
	I0725 16:35:00.036098   26369 provision.go:172] copyRemoteCerts
	I0725 16:35:00.036161   26369 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:35:00.036208   26369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:35:00.112222   26369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63329 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:35:00.201902   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:35:00.218932   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0725 16:35:00.236366   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 16:35:00.253805   26369 provision.go:86] duration metric: configureAuth took 346.802603ms
	I0725 16:35:00.253833   26369 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:35:00.254109   26369 config.go:178] Loaded profile config "kubernetes-upgrade-20220725163448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 16:35:00.254232   26369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:35:00.328162   26369 main.go:134] libmachine: Using SSH client type: native
	I0725 16:35:00.328338   26369 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63329 <nil> <nil>}
	I0725 16:35:00.328352   26369 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:35:00.449827   26369 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:35:00.449842   26369 ubuntu.go:71] root file system type: overlay
	I0725 16:35:00.449978   26369 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:35:00.450055   26369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:35:00.525083   26369 main.go:134] libmachine: Using SSH client type: native
	I0725 16:35:00.525249   26369 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63329 <nil> <nil>}
	I0725 16:35:00.525298   26369 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:35:00.653736   26369 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:35:00.653814   26369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:35:00.728983   26369 main.go:134] libmachine: Using SSH client type: native
	I0725 16:35:00.729162   26369 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 63329 <nil> <nil>}
	I0725 16:35:00.729176   26369 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 16:35:01.327859   26369 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:35:00.667418760 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0725 16:35:01.327883   26369 machine.go:91] provisioned docker machine in 1.82639068s
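The provisioning step just completed hinges on the diff-then-replace one-liner above: the freshly rendered unit is written to docker.service.new, and only when diff -u reports a difference is it moved into place and Docker reloaded and restarted, so an unchanged configuration never bounces the daemon. The same write-only-if-changed shape, sketched locally in Go with made-up paths:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// updateIfChanged installs want at path only when the current content
	// differs, returning whether a follow-up reload/restart would be needed.
	func updateIfChanged(path string, want []byte) (bool, error) {
		cur, err := os.ReadFile(path)
		if err == nil && bytes.Equal(cur, want) {
			return false, nil // identical: skip the restart entirely
		}
		tmp := path + ".new"
		if err := os.WriteFile(tmp, want, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(tmp, path)
	}

	func main() {
		changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		fmt.Println(changed, err)
	}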
	I0725 16:35:01.327889   26369 client.go:171] LocalClient.Create took 8.100524242s
	I0725 16:35:01.327908   26369 start.go:174] duration metric: libmachine.API.Create for "kubernetes-upgrade-20220725163448-14919" took 8.100570849s
	I0725 16:35:01.327920   26369 start.go:307] post-start starting for "kubernetes-upgrade-20220725163448-14919" (driver="docker")
	I0725 16:35:01.327925   26369 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:35:01.327993   26369 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:35:01.328064   26369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:35:01.406498   26369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63329 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:35:01.497881   26369 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:35:01.501447   26369 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:35:01.501463   26369 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:35:01.501470   26369 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:35:01.501477   26369 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 16:35:01.501486   26369 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:35:01.501590   26369 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:35:01.501732   26369 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:35:01.501884   26369 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:35:01.510921   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:35:01.529573   26369 start.go:310] post-start completed in 201.642207ms
	I0725 16:35:01.530288   26369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725163448-14919
	I0725 16:35:01.602490   26369 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/config.json ...
	I0725 16:35:01.602890   26369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:35:01.602943   26369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:35:01.678510   26369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63329 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:35:01.768665   26369 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:35:01.773731   26369 start.go:135] duration metric: createHost completed in 8.592663349s
	I0725 16:35:01.773751   26369 start.go:82] releasing machines lock for "kubernetes-upgrade-20220725163448-14919", held for 8.59275798s
	I0725 16:35:01.773842   26369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725163448-14919
	I0725 16:35:01.847076   26369 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:35:01.847075   26369 ssh_runner.go:195] Run: systemctl --version
	I0725 16:35:01.847154   26369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:35:01.847160   26369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:35:01.928771   26369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63329 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:35:01.930497   26369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63329 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:35:02.241291   26369 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:35:02.264768   26369 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:35:02.264858   26369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:35:02.274934   26369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:35:02.289065   26369 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
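	The crictl.yaml written just above points crictl at the dockershim socket, so CRI-level tooling talks to the Docker runtime. Assuming crictl is installed on the node, the endpoint can be checked with its standard subcommands (illustrative, not part of this run):
	
	    sudo crictl --config /etc/crictl.yaml info   # prints runtime status
	    sudo crictl ps -a                            # lists CRI containers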
	I0725 16:35:02.361562   26369 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:35:02.447666   26369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:35:02.514133   26369 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:35:02.723441   26369 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:35:02.765564   26369 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:35:02.846302   26369 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0725 16:35:02.846556   26369 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220725163448-14919 dig +short host.docker.internal
	I0725 16:35:02.987429   26369 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:35:02.987641   26369 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:35:02.991989   26369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
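	The one-liner above is an idempotent way to pin a hosts entry: grep -v strips any stale host.minikube.internal line, echo appends the fresh mapping, and staging in /tmp before a sudo cp avoids truncating /etc/hosts mid-write. The same pattern in isolation (IP and host name copied from this run):
	
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	      echo $'192.168.65.2\thost.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts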
	I0725 16:35:03.001414   26369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:35:03.073708   26369 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:35:03.073771   26369 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:35:03.102765   26369 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 16:35:03.102781   26369 docker.go:542] Images already preloaded, skipping extraction
	I0725 16:35:03.102853   26369 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:35:03.134383   26369 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 16:35:03.134419   26369 cache_images.go:84] Images are preloaded, skipping loading
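	Extraction is skipped here because every image required for v1.16.0 already appears in the docker images listing above. A hedged sketch of that kind of presence check, with the required list abbreviated to three of the images printed earlier:
	
	    for img in k8s.gcr.io/kube-apiserver:v1.16.0 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/pause:3.1; do
	      docker images --format '{{.Repository}}:{{.Tag}}' | grep -qx "$img" || echo "missing: $img"
	    done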
	I0725 16:35:03.134494   26369 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 16:35:03.209243   26369 cni.go:95] Creating CNI manager for ""
	I0725 16:35:03.209256   26369 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:35:03.209268   26369 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:35:03.209297   26369 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220725163448-14919 NodeName:kubernetes-upgrade-20220725163448-14919 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:35:03.209429   26369 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220725163448-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20220725163448-14919
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
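	The generated file stacks four kubeadm API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- markers; kubeadm splits and validates them individually. One way to exercise such a file without changing the node, assuming a kubeadm binary on PATH (kubeadm init accepts --dry-run):
	
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run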
	
	I0725 16:35:03.209508   26369 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220725163448-14919 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220725163448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
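	The kubelet drop-in above uses the same clear-then-set ExecStart pattern as the docker unit earlier, pinning the v1.16.0 kubelet binary, its kubeconfig paths, and the node IP. Once the file lands in /etc/systemd/system/kubelet.service.d/ (scp'd a few lines below), the effective command line can be confirmed with standard systemd queries (illustrative):
	
	    sudo systemctl cat kubelet
	    systemctl show kubelet -p ExecStart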
	I0725 16:35:03.209567   26369 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0725 16:35:03.217336   26369 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 16:35:03.217388   26369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:35:03.224655   26369 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0725 16:35:03.237515   26369 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 16:35:03.250242   26369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0725 16:35:03.262772   26369 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:35:03.266653   26369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:35:03.276405   26369 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919 for IP: 192.168.76.2
	I0725 16:35:03.276509   26369 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:35:03.276556   26369 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:35:03.276603   26369 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.key
	I0725 16:35:03.276617   26369 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.crt with IP's: []
	I0725 16:35:03.428408   26369 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.crt ...
	I0725 16:35:03.428424   26369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.crt: {Name:mkfc5f74d4fc7d69eefd0dac05c116228950aa38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:35:03.428756   26369 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.key ...
	I0725 16:35:03.428773   26369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.key: {Name:mkdb55a263a74183935f3831863271dd8d05ea85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:35:03.428973   26369 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.key.31bdca25
	I0725 16:35:03.428990   26369 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0725 16:35:03.523118   26369 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.crt.31bdca25 ...
	I0725 16:35:03.523136   26369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.crt.31bdca25: {Name:mk26512b253c57428da3b42d6789d30de71a767a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:35:03.523421   26369 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.key.31bdca25 ...
	I0725 16:35:03.523437   26369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.key.31bdca25: {Name:mk3869a7de245e43d6b2220235dc267e4da22c53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:35:03.523645   26369 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.crt
	I0725 16:35:03.523797   26369 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.key
	I0725 16:35:03.523945   26369 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/proxy-client.key
	I0725 16:35:03.523961   26369 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/proxy-client.crt with IP's: []
	I0725 16:35:03.904364   26369 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/proxy-client.crt ...
	I0725 16:35:03.904379   26369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/proxy-client.crt: {Name:mk84e5ae292316e6cf485a913eac37772dacbeed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:35:03.904667   26369 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/proxy-client.key ...
	I0725 16:35:03.904675   26369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/proxy-client.key: {Name:mk9dae7859549a78d6ace17474bbb9f3fc5f2e6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:35:03.905066   26369 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:35:03.905112   26369 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:35:03.905147   26369 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:35:03.905188   26369 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:35:03.905221   26369 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:35:03.905254   26369 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:35:03.905380   26369 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:35:03.905868   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:35:03.924267   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 16:35:03.941372   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:35:03.958578   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 16:35:03.975433   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:35:03.992804   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:35:04.009492   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:35:04.026405   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:35:04.043833   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:35:04.060926   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:35:04.078034   26369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:35:04.094916   26369 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 16:35:04.107622   26369 ssh_runner.go:195] Run: openssl version
	I0725 16:35:04.113239   26369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:35:04.121410   26369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:35:04.125246   26369 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:35:04.125286   26369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:35:04.130749   26369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 16:35:04.139412   26369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:35:04.147663   26369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:35:04.151710   26369 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:35:04.151757   26369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:35:04.156988   26369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 16:35:04.164911   26369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:35:04.172494   26369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:35:04.176521   26369 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:35:04.176558   26369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:35:04.181826   26369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
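	The test/ln pairs above implement OpenSSL's hashed-directory CA lookup: clients locate a trusted certificate by its subject-name hash, so each PEM gets a symlink named <hash>.0 in /etc/ssl/certs, and the values printed by the openssl x509 -hash calls (3ec20f2e, b5213941, 51391683) are exactly the link names used. A generic sketch with an illustrative certificate path; the OpenSSL c_rehash helper automates the same thing for a whole directory:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${h}.0"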
	I0725 16:35:04.189313   26369 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220725163448-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220725163448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:35:04.189405   26369 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:35:04.218230   26369 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:35:04.225991   26369 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:35:04.233361   26369 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:35:04.233441   26369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:35:04.240576   26369 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:35:04.240621   26369 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:37:02.716578   26369 out.go:204]   - Generating certificates and keys ...
	I0725 16:37:02.758956   26369 out.go:204]   - Booting up control plane ...
	W0725 16:37:02.762866   26369 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220725163448-14919 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220725163448-14919 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
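	The repeated healthz probes in the output above are kubeadm polling the kubelet's local health endpoint; connection refused on 127.0.0.1:10248 means the kubelet process never came up, consistent with the Service-Kubelet warning in stderr. On the node, the suggested triage boils down to a few commands (all standard, recapped here):
	
	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 50
	    curl -sSL http://localhost:10248/healthz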
	
	I0725 16:37:02.762907   26369 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:37:03.196748   26369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:37:03.206789   26369 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:37:03.206861   26369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:37:03.214555   26369 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:37:03.214585   26369 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:37:04.000557   26369 out.go:204]   - Generating certificates and keys ...
	I0725 16:37:04.988542   26369 out.go:204]   - Booting up control plane ...
	I0725 16:38:59.906710   26369 kubeadm.go:397] StartCluster complete in 3m55.71524022s
	I0725 16:38:59.906788   26369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:38:59.934918   26369 logs.go:274] 0 containers: []
	W0725 16:38:59.934933   26369 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:38:59.934999   26369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:38:59.964070   26369 logs.go:274] 0 containers: []
	W0725 16:38:59.964081   26369 logs.go:276] No container was found matching "etcd"
	I0725 16:38:59.964143   26369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:38:59.992229   26369 logs.go:274] 0 containers: []
	W0725 16:38:59.992242   26369 logs.go:276] No container was found matching "coredns"
	I0725 16:38:59.992298   26369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:39:00.022676   26369 logs.go:274] 0 containers: []
	W0725 16:39:00.022688   26369 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:39:00.022797   26369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:39:00.052612   26369 logs.go:274] 0 containers: []
	W0725 16:39:00.052625   26369 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:39:00.052691   26369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:39:00.083312   26369 logs.go:274] 0 containers: []
	W0725 16:39:00.083325   26369 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:39:00.083388   26369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:39:00.115276   26369 logs.go:274] 0 containers: []
	W0725 16:39:00.115289   26369 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:39:00.115352   26369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:39:00.144049   26369 logs.go:274] 0 containers: []
	W0725 16:39:00.144061   26369 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:39:00.144068   26369 logs.go:123] Gathering logs for kubelet ...
	I0725 16:39:00.144079   26369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:39:00.185235   26369 logs.go:123] Gathering logs for dmesg ...
	I0725 16:39:00.185249   26369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:39:00.199092   26369 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:39:00.199105   26369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:39:00.250374   26369 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:39:00.250386   26369 logs.go:123] Gathering logs for Docker ...
	I0725 16:39:00.250392   26369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:39:00.265738   26369 logs.go:123] Gathering logs for container status ...
	I0725 16:39:00.265750   26369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:39:02.322573   26369 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056791602s)
	W0725 16:39:02.322715   26369 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 16:39:02.322730   26369 out.go:239] * 
	W0725 16:39:02.322859   26369 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you can list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
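Under the docker driver, the two commands kubeadm recommends above have to run inside the minikube node container, not on the macOS host. One way to capture both from the harness side, sketched in Go (the profile name is taken from this run; passing a command through 'minikube ssh --' is standard usage, but treat the exact plumbing as an assumption):

	// Sketch: gather the diagnostics kubeadm recommends from inside the
	// docker-driver node container via `minikube ssh`.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "kubernetes-upgrade-20220725163448-14919" // from this run
		for _, diag := range []string{
			"sudo systemctl status kubelet --no-pager",
			"sudo journalctl -xeu kubelet --no-pager",
		} {
			// CombinedOutput still returns whatever was printed on failure,
			// which is exactly what a post-mortem wants.
			out, _ := exec.Command("out/minikube-darwin-amd64",
				"ssh", "-p", profile, "--", diag).CombinedOutput()
			fmt.Printf("== %s ==\n%s\n", diag, out)
		}
	}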
	
	W0725 16:39:02.322875   26369 out.go:239] * 
	W0725 16:39:02.323440   26369 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 16:39:02.388362   26369 out.go:177] 
	W0725 16:39:02.432425   26369 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you can list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 16:39:02.432607   26369 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 16:39:02.432699   26369 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 16:39:02.491151   26369 out.go:177] 

** /stderr **
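The suggestion in the stderr above is minikube's generic remediation for K8S_KUBELET_NOT_RUNNING. A sketch of retrying the failed start with that flag added, driving the same binary the test uses (the flag text is verbatim from the suggestion; whether it fixes this particular v1.16.0 failure is not established by this log):

	// Sketch: retry the failed start with the flag named in the suggestion.
	// Command layout matches the invocations recorded in this report.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "start",
			"-p", "kubernetes-upgrade-20220725163448-14919",
			"--memory=2200",
			"--kubernetes-version=v1.16.0",
			"--extra-config=kubelet.cgroup-driver=systemd", // from the suggestion
			"--driver=docker")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1) // still failing; fall back to the linked issue
		}
	}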
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725163448-14919 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220725163448-14919
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220725163448-14919: (1.644829344s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220725163448-14919 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220725163448-14919 status --format={{.Host}}: exit status 7 (117.852008ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
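"exit status 7 (may be ok)" is expected right after a stop: minikube's status command composes its exit code from bit flags, so 7 means host, control plane, and kubelet/apiserver are all reported down, matching the "Stopped" stdout above. The flag names below follow minikube's cmd/minikube/cmd/status.go and are an assumption about the source, not something this log shows:

	// Why 7: minikube status builds its exit code from bit flags
	// (names assumed from minikube's cmd/minikube/cmd/status.go).
	package main

	import "fmt"

	const (
		minikubeNotRunningStatusFlag = 1 << 0 // host not running
		clusterNotRunningStatusFlag  = 1 << 1 // control plane not running
		k8sNotRunningStatusFlag      = 1 << 2 // kubelet/apiserver not running
	)

	func main() {
		code := minikubeNotRunningStatusFlag | clusterNotRunningStatusFlag | k8sNotRunningStatusFlag
		fmt.Println(code) // 7: everything down, matching the "Stopped" stdout above
	}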
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725163448-14919 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725163448-14919 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker : (34.236365744s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220725163448-14919 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725163448-14919 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725163448-14919 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (372.28225ms)

-- stdout --
	* [kubernetes-upgrade-20220725163448-14919] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.24.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220725163448-14919
	    minikube start -p kubernetes-upgrade-20220725163448-14919 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220725163448-149192 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.24.3, by running:
	    
	    minikube start -p kubernetes-upgrade-20220725163448-14919 --kubernetes-version=v1.24.3
	    

** /stderr **
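Here the non-zero exit is the point of the step: minikube refuses the downgrade with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). A sketch of asserting that outcome with os/exec, in the spirit of version_upgrade_test.go (the real helper plumbing is not shown in this log):

	// Sketch: assert the downgrade is refused with exit status 106, the code
	// this log pairs with K8S_DOWNGRADE_UNSUPPORTED.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "start",
			"-p", "kubernetes-upgrade-20220725163448-14919",
			"--memory=2200", "--kubernetes-version=v1.16.0", "--driver=docker")
		err := cmd.Run()
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 106 {
			fmt.Println("downgrade refused as expected (exit status 106)")
			return
		}
		fmt.Printf("unexpected result: %v\n", err) // nil would mean the downgrade ran
	}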
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725163448-14919 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725163448-14919 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker : (22.752898417s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-07-25 16:40:01.788317 -0700 PDT m=+2856.753705328
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220725163448-14919
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220725163448-14919:

-- stdout --
	[
	    {
	        "Id": "aff3108fdfe0302be1c74b2273649191d4dfee91bf861c3464d6216bf53b912d",
	        "Created": "2022-07-25T23:34:58.417024714Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 163121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:39:05.684709599Z",
	            "FinishedAt": "2022-07-25T23:39:03.086488458Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/aff3108fdfe0302be1c74b2273649191d4dfee91bf861c3464d6216bf53b912d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aff3108fdfe0302be1c74b2273649191d4dfee91bf861c3464d6216bf53b912d/hostname",
	        "HostsPath": "/var/lib/docker/containers/aff3108fdfe0302be1c74b2273649191d4dfee91bf861c3464d6216bf53b912d/hosts",
	        "LogPath": "/var/lib/docker/containers/aff3108fdfe0302be1c74b2273649191d4dfee91bf861c3464d6216bf53b912d/aff3108fdfe0302be1c74b2273649191d4dfee91bf861c3464d6216bf53b912d-json.log",
	        "Name": "/kubernetes-upgrade-20220725163448-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220725163448-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220725163448-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3aca2ad0cb87bb13f9b7bf4d5036529ce48fa5041dd940886e843d9f28d1c419-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c70
0ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/d
ocker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a70
0e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3aca2ad0cb87bb13f9b7bf4d5036529ce48fa5041dd940886e843d9f28d1c419/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3aca2ad0cb87bb13f9b7bf4d5036529ce48fa5041dd940886e843d9f28d1c419/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3aca2ad0cb87bb13f9b7bf4d5036529ce48fa5041dd940886e843d9f28d1c419/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220725163448-14919",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220725163448-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220725163448-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220725163448-14919",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220725163448-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "88eea5e03c628b5880217f8b9e270abf82c4f0842ee5a30b11b9502a43de4c8a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64036"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64037"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64038"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64039"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64040"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/88eea5e03c62",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220725163448-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "aff3108fdfe0",
	                        "kubernetes-upgrade-20220725163448-14919"
	                    ],
	                    "NetworkID": "812a0d6d94807be61bf6324b83ad58c9692cc3311ff205efd6076f5405909b28",
	                    "EndpointID": "efe77dd535588909e02797d1cf0f9480e045812b731d434517364d93d1ccbe6f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
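The block above is the raw output of docker inspect for the node container. When scripting a post-mortem against it, the health-relevant fields can be decoded directly; a sketch that models only the keys visible in this report:

	// Sketch: decode the health-relevant fields of `docker inspect` output.
	// The struct models only keys visible in the dump above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		Name  string
		State struct {
			Status   string
			Running  bool
			ExitCode int
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect",
			"kubernetes-upgrade-20220725163448-14919").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		for _, c := range containers {
			fmt.Printf("%s: status=%s running=%t exit=%d\n",
				c.Name, c.State.Status, c.State.Running, c.State.ExitCode)
		}
	}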
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220725163448-14919 -n kubernetes-upgrade-20220725163448-14919
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220725163448-14919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220725163448-14919 logs -n 25: (2.804231612s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                      | force-systemd-flag-20220725163137-14919 | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | force-systemd-flag-20220725163137-14919 |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220725163211-14919    | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | cert-expiration-20220725163211-14919    |                                         |         |         |                     |                     |
	|         | --memory=2048 --cert-expiration=3m      |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | docker-flags-20220725163143-14919       | docker-flags-20220725163143-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                     |                     |
	|         | --property=Environment --no-pager       |                                         |         |         |                     |                     |
	| ssh     | docker-flags-20220725163143-14919       | docker-flags-20220725163143-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                     |                     |
	|         | --property=ExecStart --no-pager         |                                         |         |         |                     |                     |
	| delete  | -p                                      | docker-flags-20220725163143-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | docker-flags-20220725163143-14919       |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-options-20220725163217-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | cert-options-20220725163217-14919       |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1               |                                         |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15           |                                         |         |         |                     |                     |
	|         | --apiserver-names=localhost             |                                         |         |         |                     |                     |
	|         | --apiserver-names=www.google.com        |                                         |         |         |                     |                     |
	|         | --apiserver-port=8555                   |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	|         | --apiserver-name=localhost              |                                         |         |         |                     |                     |
	| ssh     | cert-options-20220725163217-14919       | cert-options-20220725163217-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | ssh openssl x509 -text -noout -in       |                                         |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt   |                                         |         |         |                     |                     |
	| ssh     | -p                                      | cert-options-20220725163217-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | cert-options-20220725163217-14919       |                                         |         |         |                     |                     |
	|         | -- sudo cat                             |                                         |         |         |                     |                     |
	|         | /etc/kubernetes/admin.conf              |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-options-20220725163217-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | cert-options-20220725163217-14919       |                                         |         |         |                     |                     |
	| delete  | -p                                      | running-upgrade-20220725163251-14919    | jenkins | v1.26.0 | 25 Jul 22 16:33 PDT | 25 Jul 22 16:34 PDT |
	|         | running-upgrade-20220725163251-14919    |                                         |         |         |                     |                     |
	| delete  | -p                                      | missing-upgrade-20220725163400-14919    | jenkins | v1.26.0 | 25 Jul 22 16:34 PDT | 25 Jul 22 16:34 PDT |
	|         | missing-upgrade-20220725163400-14919    |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220725163448-14919 | jenkins | v1.26.0 | 25 Jul 22 16:34 PDT |                     |
	|         | kubernetes-upgrade-20220725163448-14919 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220725163211-14919    | jenkins | v1.26.0 | 25 Jul 22 16:35 PDT | 25 Jul 22 16:36 PDT |
	|         | cert-expiration-20220725163211-14919    |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --cert-expiration=8760h                 |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-expiration-20220725163211-14919    | jenkins | v1.26.0 | 25 Jul 22 16:36 PDT | 25 Jul 22 16:36 PDT |
	|         | cert-expiration-20220725163211-14919    |                                         |         |         |                     |                     |
	| delete  | -p                                      | stopped-upgrade-20220725163620-14919    | jenkins | v1.26.0 | 25 Jul 22 16:37 PDT | 25 Jul 22 16:37 PDT |
	|         | stopped-upgrade-20220725163620-14919    |                                         |         |         |                     |                     |
	| start   | -p pause-20220725163713-14919           | pause-20220725163713-14919              | jenkins | v1.26.0 | 25 Jul 22 16:37 PDT | 25 Jul 22 16:37 PDT |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --install-addons=false                  |                                         |         |         |                     |                     |
	|         | --wait=all --driver=docker              |                                         |         |         |                     |                     |
	| start   | -p pause-20220725163713-14919           | pause-20220725163713-14919              | jenkins | v1.26.0 | 25 Jul 22 16:37 PDT | 25 Jul 22 16:38 PDT |
	|         | --alsologtostderr -v=1                  |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| pause   | -p pause-20220725163713-14919           | pause-20220725163713-14919              | jenkins | v1.26.0 | 25 Jul 22 16:38 PDT | 25 Jul 22 16:38 PDT |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	| stop    | -p                                      | kubernetes-upgrade-20220725163448-14919 | jenkins | v1.26.0 | 25 Jul 22 16:39 PDT | 25 Jul 22 16:39 PDT |
	|         | kubernetes-upgrade-20220725163448-14919 |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220725163448-14919 | jenkins | v1.26.0 | 25 Jul 22 16:39 PDT | 25 Jul 22 16:39 PDT |
	|         | kubernetes-upgrade-20220725163448-14919 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220725163448-14919 | jenkins | v1.26.0 | 25 Jul 22 16:39 PDT |                     |
	|         | kubernetes-upgrade-20220725163448-14919 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220725163448-14919 | jenkins | v1.26.0 | 25 Jul 22 16:39 PDT | 25 Jul 22 16:40 PDT |
	|         | kubernetes-upgrade-20220725163448-14919 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| delete  | -p pause-20220725163713-14919           | pause-20220725163713-14919              | jenkins | v1.26.0 | 25 Jul 22 16:39 PDT | 25 Jul 22 16:39 PDT |
	| start   | -p                                      | NoKubernetes-20220725163945-14919       | jenkins | v1.26.0 | 25 Jul 22 16:39 PDT |                     |
	|         | NoKubernetes-20220725163945-14919       |                                         |         |         |                     |                     |
	|         | --no-kubernetes                         |                                         |         |         |                     |                     |
	|         | --kubernetes-version=1.20               |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220725163945-14919       | jenkins | v1.26.0 | 25 Jul 22 16:39 PDT |                     |
	|         | NoKubernetes-20220725163945-14919       |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 16:39:46
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 16:39:46.621548   27571 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:39:46.621759   27571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:39:46.621761   27571 out.go:309] Setting ErrFile to fd 2...
	I0725 16:39:46.621764   27571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:39:46.621876   27571 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:39:46.622330   27571 out.go:303] Setting JSON to false
	I0725 16:39:46.637592   27571 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9309,"bootTime":1658783077,"procs":365,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:39:46.637681   27571 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:39:46.661222   27571 out.go:177] * [NoKubernetes-20220725163945-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:39:46.682464   27571 notify.go:193] Checking for updates...
	I0725 16:39:46.703245   27571 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:39:46.724140   27571 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:39:46.745640   27571 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:39:46.767340   27571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:39:46.809211   27571 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:39:46.831484   27571 config.go:178] Loaded profile config "kubernetes-upgrade-20220725163448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:39:46.831573   27571 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:39:46.912350   27571 docker.go:137] docker version: linux-20.10.17
	I0725 16:39:46.912481   27571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:39:47.059662   27571 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:39:46.982598801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:39:47.100922   27571 out.go:177] * Using the docker driver based on user configuration
	I0725 16:39:47.122072   27571 start.go:284] selected driver: docker
	I0725 16:39:47.122108   27571 start.go:808] validating driver "docker" against <nil>
	I0725 16:39:47.122129   27571 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:39:47.122307   27571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:39:47.272480   27571 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:39:47.195021868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:39:47.272601   27571 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0725 16:39:47.275145   27571 start_flags.go:377] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0725 16:39:47.275254   27571 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 16:39:47.297008   27571 out.go:177] * Using Docker Desktop driver with root privileges
	I0725 16:39:47.317583   27571 cni.go:95] Creating CNI manager for ""
	I0725 16:39:47.317603   27571 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:39:47.317651   27571 start_flags.go:310] config:
	{Name:NoKubernetes-20220725163945-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:NoKubernetes-20220725163945-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:39:47.339949   27571 out.go:177] * Starting control plane node NoKubernetes-20220725163945-14919 in cluster NoKubernetes-20220725163945-14919
	I0725 16:39:47.361678   27571 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:39:47.382667   27571 out.go:177] * Pulling base image ...
	I0725 16:39:47.424845   27571 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:39:47.424880   27571 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:39:47.424919   27571 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0725 16:39:47.424944   27571 cache.go:57] Caching tarball of preloaded images
	I0725 16:39:47.425173   27571 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 16:39:47.425188   27571 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0725 16:39:47.426208   27571 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/config.json ...
	I0725 16:39:47.426300   27571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/config.json: {Name:mk68d28ff23ceaab7463ddb926350cac7896fee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:39:47.497280   27571 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:39:47.497296   27571 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:39:47.497307   27571 cache.go:208] Successfully downloaded all kic artifacts
	I0725 16:39:47.497379   27571 start.go:370] acquiring machines lock for NoKubernetes-20220725163945-14919: {Name:mkde3fd15180eb2d58d838a7ba503af25c4d3cc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:39:47.497543   27571 start.go:374] acquired machines lock for "NoKubernetes-20220725163945-14919" in 153.724µs
	I0725 16:39:47.497571   27571 start.go:92] Provisioning new machine with config: &{Name:NoKubernetes-20220725163945-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:NoKubernetes-20220725163945-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
&{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 16:39:47.497623   27571 start.go:132] createHost starting for "" (driver="docker")
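
The machines-lock entries above ({Name:mkde3fd... Clock:{} Delay:500ms Timeout:10m0s ...}) show how provisioning serializes access to shared state: the caller polls for a named exclusive lock, sleeping between attempts until a deadline expires. A minimal sketch of that Delay/Timeout pattern, assuming a hypothetical acquireLock helper built on O_CREATE|O_EXCL lock files (minikube itself delegates to a mutex library; this is only an illustration):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file until timeout elapses,
    // sleeping delay between attempts (the Delay:500ms/Timeout:10m0s pattern).
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for lock " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        // The lock path is illustrative, not minikube's actual lock location.
        release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; safe to provision")
    }
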
	I0725 16:39:45.712449   27481 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0725 16:39:45.712561   27481 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220725163448-14919 dig +short host.docker.internal
	I0725 16:39:46.267340   27481 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:39:46.267579   27481 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:39:46.272486   27481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:46.526505   27481 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:39:46.526612   27481 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:39:46.566576   27481 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0725 16:39:46.566602   27481 docker.go:542] Images already preloaded, skipping extraction
	I0725 16:39:46.566716   27481 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:39:46.664011   27481 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0725 16:39:46.664034   27481 cache_images.go:84] Images are preloaded, skipping loading
	I0725 16:39:46.664112   27481 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 16:39:46.997674   27481 cni.go:95] Creating CNI manager for ""
	I0725 16:39:46.997691   27481 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:39:46.997713   27481 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:39:46.997736   27481 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220725163448-14919 NodeName:kubernetes-upgrade-20220725163448-14919 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:39:46.997925   27481 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-20220725163448-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
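
For reference, the evictionHard values in the kubelet section above appear in the raw report as "0%!"(MISSING): the generated YAML passes through a Printf-style formatter, and an unescaped % is parsed as a formatting verb with no matching argument. A short snippet demonstrating the mechanism (standard Go fmt behavior, nothing minikube-specific):

    package main

    import "fmt"

    func main() {
        // An unescaped % starts a formatting verb; with no argument supplied,
        // fmt renders the next character as %!"(MISSING).
        fmt.Println(fmt.Sprintf(`nodefs.available: "0%"`)) // nodefs.available: "0%!"(MISSING)

        // Doubling the percent sign preserves the literal value.
        fmt.Println(fmt.Sprintf(`nodefs.available: "0%%"`)) // nodefs.available: "0%"
    }
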
	I0725 16:39:46.998101   27481 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-20220725163448-14919 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220725163448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 16:39:46.998185   27481 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0725 16:39:47.062804   27481 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 16:39:47.062874   27481 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:39:47.071422   27481 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (501 bytes)
	I0725 16:39:47.084902   27481 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 16:39:47.098872   27481 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
	I0725 16:39:47.166079   27481 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:39:47.171927   27481 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919 for IP: 192.168.76.2
	I0725 16:39:47.172094   27481 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:39:47.172173   27481 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:39:47.172303   27481 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.key
	I0725 16:39:47.172380   27481 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.key.31bdca25
	I0725 16:39:47.172445   27481 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/proxy-client.key
	I0725 16:39:47.172739   27481 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:39:47.172781   27481 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:39:47.172793   27481 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:39:47.172834   27481 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:39:47.172870   27481 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:39:47.172930   27481 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:39:47.173043   27481 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:39:47.173727   27481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:39:47.198545   27481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 16:39:47.266072   27481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:39:47.286125   27481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 16:39:47.305242   27481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:39:47.379523   27481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:39:47.398666   27481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:39:47.475415   27481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:39:47.497904   27481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:39:47.565690   27481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:39:47.587566   27481 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:39:47.609149   27481 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 16:39:47.668022   27481 ssh_runner.go:195] Run: openssl version
	I0725 16:39:47.674926   27481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:39:47.689467   27481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:39:47.694940   27481 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:39:47.694988   27481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:39:47.701958   27481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 16:39:47.712254   27481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:39:47.721947   27481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:39:47.727500   27481 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:39:47.727547   27481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:39:47.760891   27481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 16:39:47.771588   27481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:39:47.781379   27481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:39:47.785919   27481 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:39:47.785979   27481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:39:47.792576   27481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
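
The three openssl x509 -hash / ln -fs pairs above install each certificate into the OpenSSL trust-directory scheme, where a CA is looked up by a <subject-hash>.0 symlink (b5213941.0, 3ec20f2e.0, and 51391683.0 in this run). A sketch of computing that link name from Go, assuming openssl is on PATH; the subjectHash helper is hypothetical, and the real run executes the same commands over SSH as shown:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // subjectHash returns the OpenSSL subject hash used to name trust-store links.
    func subjectHash(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
        hash, err := subjectHash(cert)
        if err != nil {
            fmt.Println("hash failed:", err)
            return
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash) // e.g. b5213941.0 above
        // Equivalent of: test -L <link> || ln -fs <cert> <link>
        fmt.Println("would link", link, "->", cert)
    }
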
	I0725 16:39:47.803964   27481 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220725163448-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220725163448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:39:47.804090   27481 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:39:47.871288   27481 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:39:47.881003   27481 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 16:39:47.881025   27481 kubeadm.go:626] restartCluster start
	I0725 16:39:47.881084   27481 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 16:39:47.890140   27481 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:39:47.890221   27481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:47.971040   27481 kubeconfig.go:92] found "kubernetes-upgrade-20220725163448-14919" server: "https://127.0.0.1:64040"
	I0725 16:39:47.971507   27481 kapi.go:59] client config for kubernetes-upgrade-20220725163448-14919: &rest.Config{Host:"https://127.0.0.1:64040", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 16:39:47.972047   27481 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 16:39:47.981567   27481 api_server.go:165] Checking apiserver status ...
	I0725 16:39:47.981625   27481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:39:47.995207   27481 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/3173/cgroup
	W0725 16:39:48.008294   27481 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/3173/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:39:48.008315   27481 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64040/healthz ...
	I0725 16:39:47.541388   27571 out.go:204] * Creating docker container (CPUs=2, Memory=5895MB) ...
	I0725 16:39:47.541781   27571 start.go:166] libmachine.API.Create for "NoKubernetes-20220725163945-14919" (driver="docker")
	I0725 16:39:47.541823   27571 client.go:168] LocalClient.Create starting
	I0725 16:39:47.541945   27571 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem
	I0725 16:39:47.542016   27571 main.go:134] libmachine: Decoding PEM data...
	I0725 16:39:47.542037   27571 main.go:134] libmachine: Parsing certificate...
	I0725 16:39:47.542134   27571 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem
	I0725 16:39:47.542183   27571 main.go:134] libmachine: Decoding PEM data...
	I0725 16:39:47.542197   27571 main.go:134] libmachine: Parsing certificate...
	I0725 16:39:47.543069   27571 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220725163945-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 16:39:47.618464   27571 cli_runner.go:211] docker network inspect NoKubernetes-20220725163945-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 16:39:47.618572   27571 network_create.go:272] running [docker network inspect NoKubernetes-20220725163945-14919] to gather additional debugging logs...
	I0725 16:39:47.618587   27571 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220725163945-14919
	W0725 16:39:47.692246   27571 cli_runner.go:211] docker network inspect NoKubernetes-20220725163945-14919 returned with exit code 1
	I0725 16:39:47.692267   27571 network_create.go:275] error running [docker network inspect NoKubernetes-20220725163945-14919]: docker network inspect NoKubernetes-20220725163945-14919: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: NoKubernetes-20220725163945-14919
	I0725 16:39:47.692277   27571 network_create.go:277] output of [docker network inspect NoKubernetes-20220725163945-14919]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: NoKubernetes-20220725163945-14919
	
	** /stderr **
	I0725 16:39:47.692390   27571 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 16:39:47.768823   27571 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006cc6b8] misses:0}
	I0725 16:39:47.768859   27571 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:39:47.768874   27571 network_create.go:115] attempt to create docker network NoKubernetes-20220725163945-14919 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0725 16:39:47.768945   27571 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220725163945-14919 NoKubernetes-20220725163945-14919
	W0725 16:39:47.843495   27571 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220725163945-14919 NoKubernetes-20220725163945-14919 returned with exit code 1
	W0725 16:39:47.843526   27571 network_create.go:107] failed to create docker network NoKubernetes-20220725163945-14919 192.168.49.0/24, will retry: subnet is taken
	I0725 16:39:47.843767   27571 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006cc6b8] amended:false}} dirty:map[] misses:0}
	I0725 16:39:47.843780   27571 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:39:47.843973   27571 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006cc6b8] amended:true}} dirty:map[192.168.49.0:0xc0006cc6b8 192.168.58.0:0xc00074e3b0] misses:0}
	I0725 16:39:47.843985   27571 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:39:47.843991   27571 network_create.go:115] attempt to create docker network NoKubernetes-20220725163945-14919 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0725 16:39:47.844062   27571 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220725163945-14919 NoKubernetes-20220725163945-14919
	W0725 16:39:47.917623   27571 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220725163945-14919 NoKubernetes-20220725163945-14919 returned with exit code 1
	W0725 16:39:47.917657   27571 network_create.go:107] failed to create docker network NoKubernetes-20220725163945-14919 192.168.58.0/24, will retry: subnet is taken
	I0725 16:39:47.917932   27571 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006cc6b8] amended:true}} dirty:map[192.168.49.0:0xc0006cc6b8 192.168.58.0:0xc00074e3b0] misses:1}
	I0725 16:39:47.917945   27571 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:39:47.918149   27571 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006cc6b8] amended:true}} dirty:map[192.168.49.0:0xc0006cc6b8 192.168.58.0:0xc00074e3b0 192.168.67.0:0xc0007969d8] misses:1}
	I0725 16:39:47.918160   27571 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:39:47.918167   27571 network_create.go:115] attempt to create docker network NoKubernetes-20220725163945-14919 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0725 16:39:47.918237   27571 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220725163945-14919 NoKubernetes-20220725163945-14919
	I0725 16:39:48.032561   27571 network_create.go:99] docker network NoKubernetes-20220725163945-14919 192.168.67.0/24 created
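
The three network-create attempts above show the subnet-selection loop: reserve a candidate /24, try docker network create, and on "subnet is taken" advance to the next candidate (49 → 58 → 67, i.e. steps of 9 in the third octet, as the reservations show). A minimal sketch of that loop shelling out to the same docker command, reduced to the flags that matter for the retry (the candidate stepping is inferred from this log, not taken from minikube source):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        name := "NoKubernetes-20220725163945-14919"
        // Candidate subnets stepped by 9 in the third octet, as in the log.
        for third := 49; third < 256; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            gateway := fmt.Sprintf("192.168.%d.1", third)
            cmd := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name)
            if err := cmd.Run(); err == nil {
                fmt.Println("created", name, "on", subnet)
                return
            }
            // A non-zero exit here typically means the subnet is taken; try the next.
            fmt.Println("subnet", subnet, "unavailable, retrying")
        }
        fmt.Println("no free subnet found")
    }
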
	I0725 16:39:48.032598   27571 kic.go:106] calculated static IP "192.168.67.2" for the "NoKubernetes-20220725163945-14919" container
	I0725 16:39:48.032684   27571 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 16:39:48.105448   27571 cli_runner.go:164] Run: docker volume create NoKubernetes-20220725163945-14919 --label name.minikube.sigs.k8s.io=NoKubernetes-20220725163945-14919 --label created_by.minikube.sigs.k8s.io=true
	I0725 16:39:48.177154   27571 oci.go:103] Successfully created a docker volume NoKubernetes-20220725163945-14919
	I0725 16:39:48.177270   27571 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-20220725163945-14919-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-20220725163945-14919 --entrypoint /usr/bin/test -v NoKubernetes-20220725163945-14919:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0725 16:39:48.656784   27571 oci.go:107] Successfully prepared a docker volume NoKubernetes-20220725163945-14919
	I0725 16:39:48.656823   27571 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:39:48.656835   27571 kic.go:179] Starting extracting preloaded images to volume ...
	I0725 16:39:48.656962   27571 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v NoKubernetes-20220725163945-14919:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0725 16:39:50.713155   27481 api_server.go:266] https://127.0.0.1:64040/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 16:39:50.713223   27481 retry.go:31] will retry after 263.082536ms: https://127.0.0.1:64040/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 16:39:50.976418   27481 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64040/healthz ...
	I0725 16:39:51.167841   27481 api_server.go:266] https://127.0.0.1:64040/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:39:51.167870   27481 retry.go:31] will retry after 381.329545ms: https://127.0.0.1:64040/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:39:51.549285   27481 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64040/healthz ...
	I0725 16:39:51.556367   27481 api_server.go:266] https://127.0.0.1:64040/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:39:51.556389   27481 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:64040/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:39:51.979330   27481 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64040/healthz ...
	I0725 16:39:51.986603   27481 api_server.go:266] https://127.0.0.1:64040/healthz returned 200:
	ok
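
The sequence above (403 while the anonymous probe is refused, then 500 with the rbac/bootstrap-roles and scheduling poststarthooks still failing, then 200 ok) is the normal apiserver warm-up; the client simply polls /healthz with short, slightly randomized waits until it answers 200. A minimal sketch of such a poller, assuming a hypothetical waitHealthz helper; InsecureSkipVerify is for illustration only, while the client in the log uses the cluster CA and client certificates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200 ok
    // or the deadline passes, treating any other status as retryable.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(300 * time.Millisecond) // the log uses short randomized waits
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitHealthz("https://127.0.0.1:64040/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
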
	I0725 16:39:52.008885   27481 system_pods.go:86] 5 kube-system pods found
	I0725 16:39:52.008907   27481 system_pods.go:89] "etcd-kubernetes-upgrade-20220725163448-14919" [a898a2bc-718a-4acb-919e-4c2962311ba8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 16:39:52.008922   27481 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-20220725163448-14919" [15f29553-ca1a-4e8c-b546-f7e9bab78551] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 16:39:52.008932   27481 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-20220725163448-14919" [ad63420b-21d9-4d73-9219-ee7ef0c6ab43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 16:39:52.008939   27481 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-20220725163448-14919" [c282544a-9856-4f25-a146-a078016de046] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 16:39:52.008947   27481 system_pods.go:89] "storage-provisioner" [93486e0f-0a55-40a4-ad45-126afcb75692] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0725 16:39:52.008955   27481 kubeadm.go:610] needs reconfigure: missing components: kube-dns, kube-proxy
	I0725 16:39:52.008965   27481 kubeadm.go:1092] stopping kube-system containers ...
	I0725 16:39:52.009034   27481 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:39:52.060566   27481 docker.go:443] Stopping containers: [101d55ddf1f5 b3ab25c163b3 6fdc3048f70a 4dee689ae2b1 4316c24e8125 6d2e0a320963 2d02e5cd4214 a0d6c137e89f 44f819e3f58e 6af6492d6f4b 6c82a96c49ce 025c4d1e85c6 d333b261d3f4 5a5e7a0a1d4f 6bf6d9fe605d 26dda4f9c92a f8fd503c82ad 2899c4d28ccb]
	I0725 16:39:52.060641   27481 ssh_runner.go:195] Run: docker stop 101d55ddf1f5 b3ab25c163b3 6fdc3048f70a 4dee689ae2b1 4316c24e8125 6d2e0a320963 2d02e5cd4214 a0d6c137e89f 44f819e3f58e 6af6492d6f4b 6c82a96c49ce 025c4d1e85c6 d333b261d3f4 5a5e7a0a1d4f 6bf6d9fe605d 26dda4f9c92a f8fd503c82ad 2899c4d28ccb
	I0725 16:39:53.375740   27481 ssh_runner.go:235] Completed: docker stop 101d55ddf1f5 b3ab25c163b3 6fdc3048f70a 4dee689ae2b1 4316c24e8125 6d2e0a320963 2d02e5cd4214 a0d6c137e89f 44f819e3f58e 6af6492d6f4b 6c82a96c49ce 025c4d1e85c6 d333b261d3f4 5a5e7a0a1d4f 6bf6d9fe605d 26dda4f9c92a f8fd503c82ad 2899c4d28ccb: (1.315052586s)
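
The two ssh_runner commands above are the whole "stopping kube-system containers" step: list the IDs of containers whose names match the kubelet's k8s_<container>_<pod>_(kube-system)_... naming convention, then stop them all in a single docker stop. The same pattern via os/exec (a sketch; the helper name is mine, the filter string is taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists containers named by the kubelet's
// k8s_<name>_<pod>_(kube-system)_... convention and stops them in one call.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		return fmt.Errorf("docker ps: %w", err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	args := append([]string{"stop"}, ids...)
	return exec.Command("docker", args...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println(err)
	}
}
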
	I0725 16:39:53.375811   27481 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 16:39:53.458189   27481 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:39:53.471205   27481 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5759 Jul 25 23:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5795 Jul 25 23:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5963 Jul 25 23:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5747 Jul 25 23:37 /etc/kubernetes/scheduler.conf
	
	I0725 16:39:53.471265   27481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 16:39:53.484309   27481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 16:39:53.499139   27481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 16:39:53.508911   27481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 16:39:53.526478   27481 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:39:53.539655   27481 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 16:39:53.539669   27481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:39:53.597925   27481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:39:53.171000   27571 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v NoKubernetes-20220725163945-14919:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (4.513885982s)
	I0725 16:39:53.171033   27571 kic.go:188] duration metric: took 4.514146 seconds to extract preloaded images to volume
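
The completed cli_runner command above is how the kic driver preloads images: it bind-mounts the lz4 tarball read-only into a throwaway kicbase container alongside the cluster's named volume, then untars directly into the volume. A hedged Go sketch of issuing that docker run (the image, tarball path, and volume name below are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload untars a preloaded-images tarball into a named Docker
// volume by running tar inside a throwaway container, mirroring the
// cli_runner invocation in the log above.
func extractPreload(tarball, volume, image string) error {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
}

func main() {
	// Placeholder arguments for illustration only.
	err := extractPreload("/tmp/preloaded-images.tar.lz4", "my-minikube-vol",
		"gcr.io/k8s-minikube/kicbase:v0.0.32")
	fmt.Println(err)
}
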
	I0725 16:39:53.171280   27571 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 16:39:53.334727   27571 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-20220725163945-14919 --name NoKubernetes-20220725163945-14919 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-20220725163945-14919 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-20220725163945-14919 --network NoKubernetes-20220725163945-14919 --ip 192.168.67.2 --volume NoKubernetes-20220725163945-14919:/var --security-opt apparmor=unconfined --memory=5895mb --memory-swap=5895mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0725 16:39:53.763452   27571 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220725163945-14919 --format={{.State.Running}}
	I0725 16:39:53.850345   27571 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220725163945-14919 --format={{.State.Status}}
	I0725 16:39:53.934114   27571 cli_runner.go:164] Run: docker exec NoKubernetes-20220725163945-14919 stat /var/lib/dpkg/alternatives/iptables
	I0725 16:39:54.116802   27571 oci.go:144] the created container "NoKubernetes-20220725163945-14919" has a running status.
	I0725 16:39:54.116938   27571 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/NoKubernetes-20220725163945-14919/id_rsa...
	I0725 16:39:54.174303   27571 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/NoKubernetes-20220725163945-14919/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0725 16:39:54.305669   27571 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220725163945-14919 --format={{.State.Status}}
	I0725 16:39:54.386280   27571 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 16:39:54.386296   27571 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-20220725163945-14919 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0725 16:39:54.514956   27571 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220725163945-14919 --format={{.State.Status}}
	I0725 16:39:54.598070   27571 machine.go:88] provisioning docker machine ...
	I0725 16:39:54.598111   27571 ubuntu.go:169] provisioning hostname "NoKubernetes-20220725163945-14919"
	I0725 16:39:54.598242   27571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220725163945-14919
	I0725 16:39:54.679672   27571 main.go:134] libmachine: Using SSH client type: native
	I0725 16:39:54.679852   27571 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64287 <nil> <nil>}
	I0725 16:39:54.679865   27571 main.go:134] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-20220725163945-14919 && echo "NoKubernetes-20220725163945-14919" | sudo tee /etc/hostname
	I0725 16:39:54.811454   27571 main.go:134] libmachine: SSH cmd err, output: <nil>: NoKubernetes-20220725163945-14919
	
	I0725 16:39:54.811571   27571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220725163945-14919
	I0725 16:39:54.885689   27571 main.go:134] libmachine: Using SSH client type: native
	I0725 16:39:54.885847   27571 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64287 <nil> <nil>}
	I0725 16:39:54.885860   27571 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-20220725163945-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-20220725163945-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-20220725163945-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 16:39:55.005973   27571 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:39:55.005985   27571 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:39:55.006007   27571 ubuntu.go:177] setting up certificates
	I0725 16:39:55.006013   27571 provision.go:83] configureAuth start
	I0725 16:39:55.006081   27571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220725163945-14919
	I0725 16:39:55.080462   27571 provision.go:138] copyHostCerts
	I0725 16:39:55.080560   27571 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:39:55.080572   27571 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:39:55.080693   27571 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:39:55.080919   27571 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:39:55.080926   27571 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:39:55.080993   27571 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:39:55.081138   27571 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:39:55.081163   27571 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:39:55.081232   27571 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:39:55.081359   27571 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-20220725163945-14919 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube NoKubernetes-20220725163945-14919]
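
provision.go is issuing a Docker server certificate whose SANs cover every name and address in the san=[...] list above, signed by the profile's CA. A self-contained crypto/x509 sketch of signing a SAN certificate (it generates a throwaway CA in memory, whereas minikube loads ca.pem/ca-key.pem from disk; error handling is elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; minikube reuses its existing CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-20220725163945-14919"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "NoKubernetes-20220725163945-14919"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
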
	I0725 16:39:55.340339   27571 provision.go:172] copyRemoteCerts
	I0725 16:39:55.340391   27571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:39:55.340432   27571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220725163945-14919
	I0725 16:39:55.427651   27571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64287 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/NoKubernetes-20220725163945-14919/id_rsa Username:docker}
	I0725 16:39:55.518244   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:39:55.535735   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0725 16:39:55.553476   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 16:39:55.575039   27571 provision.go:86] duration metric: configureAuth took 569.010183ms
	I0725 16:39:55.575048   27571 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:39:55.575192   27571 config.go:178] Loaded profile config "NoKubernetes-20220725163945-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:39:55.575258   27571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220725163945-14919
	I0725 16:39:55.658015   27571 main.go:134] libmachine: Using SSH client type: native
	I0725 16:39:55.658183   27571 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64287 <nil> <nil>}
	I0725 16:39:55.658194   27571 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:39:55.783305   27571 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:39:55.783314   27571 ubuntu.go:71] root file system type: overlay
	I0725 16:39:55.783642   27571 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:39:55.783784   27571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220725163945-14919
	I0725 16:39:55.863676   27571 main.go:134] libmachine: Using SSH client type: native
	I0725 16:39:55.863838   27571 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64287 <nil> <nil>}
	I0725 16:39:55.863884   27571 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:39:56.003029   27571 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:39:56.003117   27571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220725163945-14919
	I0725 16:39:56.086475   27571 main.go:134] libmachine: Using SSH client type: native
	I0725 16:39:56.086626   27571 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64287 <nil> <nil>}
	I0725 16:39:56.086638   27571 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 16:39:56.721315   27571 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:39:56.013413608 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0725 16:39:56.721334   27571 machine.go:91] provisioned docker machine in 2.1232251s
	I0725 16:39:56.721339   27571 client.go:171] LocalClient.Create took 9.17942811s
	I0725 16:39:56.721356   27571 start.go:174] duration metric: libmachine.API.Create for "NoKubernetes-20220725163945-14919" took 9.179490955s
	I0725 16:39:56.721365   27571 start.go:307] post-start starting for "NoKubernetes-20220725163945-14919" (driver="docker")
	I0725 16:39:56.721371   27571 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:39:56.721434   27571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:39:56.721490   27571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220725163945-14919
	I0725 16:39:56.801234   27571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64287 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/NoKubernetes-20220725163945-14919/id_rsa Username:docker}
	I0725 16:39:56.893997   27571 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:39:56.897937   27571 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:39:56.897949   27571 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:39:56.897955   27571 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:39:56.897960   27571 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 16:39:56.897968   27571 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:39:56.898071   27571 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:39:56.898252   27571 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:39:56.898413   27571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:39:56.905544   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:39:56.922681   27571 start.go:310] post-start completed in 201.308262ms
	I0725 16:39:56.923381   27571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220725163945-14919
	I0725 16:39:57.000116   27571 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/config.json ...
	I0725 16:39:57.021517   27571 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:39:57.021584   27571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220725163945-14919
	I0725 16:39:57.101141   27571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64287 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/NoKubernetes-20220725163945-14919/id_rsa Username:docker}
	I0725 16:39:57.184930   27571 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:39:57.190781   27571 start.go:135] duration metric: createHost completed in 9.693061878s
	I0725 16:39:57.190795   27571 start.go:82] releasing machines lock for "NoKubernetes-20220725163945-14919", held for 9.693156674s
	I0725 16:39:57.190876   27571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220725163945-14919
	I0725 16:39:57.267043   27571 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:39:57.267047   27571 ssh_runner.go:195] Run: systemctl --version
	I0725 16:39:57.267133   27571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220725163945-14919
	I0725 16:39:57.267158   27571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220725163945-14919
	I0725 16:39:57.352122   27571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64287 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/NoKubernetes-20220725163945-14919/id_rsa Username:docker}
	I0725 16:39:57.355449   27571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64287 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/NoKubernetes-20220725163945-14919/id_rsa Username:docker}
	I0725 16:39:57.436475   27571 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:39:57.664058   27571 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:39:57.664120   27571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:39:57.676180   27571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:39:57.691618   27571 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 16:39:57.762743   27571 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:39:57.832433   27571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:39:57.907662   27571 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:39:58.133020   27571 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 16:39:58.202902   27571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:39:58.272317   27571 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 16:39:58.284370   27571 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 16:39:58.284443   27571 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 16:39:58.289232   27571 start.go:471] Will wait 60s for crictl version
	I0725 16:39:58.289289   27571 ssh_runner.go:195] Run: sudo crictl version
	I0725 16:39:58.396637   27571 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 16:39:58.396711   27571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:39:58.436403   27571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:39:54.194133   27481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:39:54.423643   27481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:39:54.487822   27481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:39:54.565101   27481 api_server.go:51] waiting for apiserver process to appear ...
	I0725 16:39:54.565192   27481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:39:55.079325   27481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:39:55.579516   27481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:39:55.593195   27481 api_server.go:71] duration metric: took 1.028089675s to wait for apiserver process to appear ...
	I0725 16:39:55.593214   27481 api_server.go:87] waiting for apiserver healthz status ...
	I0725 16:39:55.593224   27481 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64040/healthz ...
	I0725 16:39:58.845695   27481 api_server.go:266] https://127.0.0.1:64040/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 16:39:58.845721   27481 api_server.go:102] status: https://127.0.0.1:64040/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 16:39:58.498483   27571 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0725 16:39:58.498582   27571 cli_runner.go:164] Run: docker exec -t NoKubernetes-20220725163945-14919 dig +short host.docker.internal
	I0725 16:39:58.640235   27571 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:39:58.640328   27571 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:39:58.644867   27571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
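
The one-liner above updates /etc/hosts idempotently: drop any existing host.minikube.internal line, append the fresh mapping, and copy the rewritten file back into place. The equivalent logic in Go (writing the file directly rather than via a temp file plus sudo cp, as a simplification):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so exactly one line maps name to ip,
// mirroring the grep -v / echo / cp pipeline in the log above.
// Blank lines are dropped as a side effect of the rewrite.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
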
	I0725 16:39:58.655395   27571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" NoKubernetes-20220725163945-14919
	I0725 16:39:58.733223   27571 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:39:58.733283   27571 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:39:58.765518   27571 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 16:39:58.765534   27571 docker.go:542] Images already preloaded, skipping extraction
	I0725 16:39:58.765600   27571 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:39:58.796264   27571 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 16:39:58.796276   27571 cache_images.go:84] Images are preloaded, skipping loading
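
docker.go decides whether the preload tarball still needs extracting by comparing the images already in the daemon against the list required for this Kubernetes version; here every required image is present, so loading is skipped. A sketch of that check (the sample list below is abbreviated from the stdout above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every image in want already exists in the
// local Docker daemon, the check behind the "Images already preloaded" line.
func imagesPreloaded(want []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range want {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"k8s.gcr.io/kube-apiserver:v1.24.3",
		"k8s.gcr.io/etcd:3.5.3-0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	})
	fmt.Println(ok, err)
}
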
	I0725 16:39:58.796391   27571 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 16:39:58.913554   27571 cni.go:95] Creating CNI manager for ""
	I0725 16:39:58.913562   27571 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:39:58.913575   27571 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:39:58.913588   27571 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-20220725163945-14919 NodeName:NoKubernetes-20220725163945-14919 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:39:58.913694   27571 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "NoKubernetes-20220725163945-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 16:39:58.913771   27571 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=NoKubernetes-20220725163945-14919 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:NoKubernetes-20220725163945-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 16:39:58.913821   27571 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0725 16:39:58.921898   27571 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 16:39:58.921989   27571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:39:58.929480   27571 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (495 bytes)
	I0725 16:39:58.942463   27571 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 16:39:58.959123   27571 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2055 bytes)
	I0725 16:39:58.977284   27571 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:39:58.982468   27571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:39:58.995563   27571 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919 for IP: 192.168.67.2
	I0725 16:39:58.995673   27571 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:39:58.995725   27571 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:39:58.995762   27571 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/client.key
	I0725 16:39:58.995772   27571 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/client.crt with IP's: []
	I0725 16:39:59.127202   27571 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/client.crt ...
	I0725 16:39:59.127215   27571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/client.crt: {Name:mk1713decc0ebb8a06321d82d43344d0ea52c642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:39:59.127520   27571 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/client.key ...
	I0725 16:39:59.127524   27571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/client.key: {Name:mk3b3961466da4685020510bc59f2e435d334c7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:39:59.127724   27571 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.key.c7fa3a9e
	I0725 16:39:59.127740   27571 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0725 16:39:59.232201   27571 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.crt.c7fa3a9e ...
	I0725 16:39:59.232208   27571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.crt.c7fa3a9e: {Name:mk5fc142afd06d2f175cdcb252cdad8166356284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:39:59.232464   27571 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.key.c7fa3a9e ...
	I0725 16:39:59.232469   27571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.key.c7fa3a9e: {Name:mk9e8ce80c9877e965ed43b3c49e44e6b1d8a158 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:39:59.232655   27571 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.crt
	I0725 16:39:59.232854   27571 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.key
	I0725 16:39:59.233046   27571 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/proxy-client.key
	I0725 16:39:59.233059   27571 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/proxy-client.crt with IP's: []
	I0725 16:39:59.264380   27571 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/proxy-client.crt ...
	I0725 16:39:59.264385   27571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/proxy-client.crt: {Name:mk1ff172808f0f980239947aa6ecd787259b288d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:39:59.264604   27571 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/proxy-client.key ...
	I0725 16:39:59.264609   27571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/proxy-client.key: {Name:mkca2216b3cd816f8472093c2b50e79961e5053e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:39:59.264966   27571 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:39:59.265004   27571 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:39:59.265011   27571 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:39:59.265037   27571 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:39:59.265062   27571 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:39:59.265089   27571 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:39:59.265146   27571 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:39:59.265595   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:39:59.284364   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 16:39:59.302011   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:39:59.318716   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/NoKubernetes-20220725163945-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 16:39:59.335499   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:39:59.354374   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:39:59.372703   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:39:59.390891   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:39:59.409199   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:39:59.426396   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:39:59.444109   27571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:39:59.462298   27571 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 16:39:59.474788   27571 ssh_runner.go:195] Run: openssl version
	I0725 16:39:59.480140   27571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:39:59.488031   27571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:39:59.492580   27571 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:39:59.492624   27571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:39:59.498120   27571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 16:39:59.505703   27571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:39:59.513850   27571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:39:59.518140   27571 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:39:59.518182   27571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:39:59.523311   27571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
	I0725 16:39:59.530793   27571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:39:59.538435   27571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:39:59.542559   27571 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:39:59.542596   27571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:39:59.547603   27571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
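
The three openssl x509 -hash / ln -fs sequences above install each CA the way OpenSSL's trust store expects: the PEM lives under /usr/share/ca-certificates, and a symlink named <subject-hash>.0 (for example b5213941.0) points at it from /etc/ssl/certs. A sketch of that step (paths illustrative; it links straight to the cert rather than chaining through a second symlink as the log does, and needs root to write /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert symlinks certPath into /etc/ssl/certs under OpenSSL's
// <subject-hash>.0 naming, mirroring the openssl/ln commands above.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
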
	I0725 16:39:59.557862   27571 kubeadm.go:395] StartCluster: {Name:NoKubernetes-20220725163945-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:NoKubernetes-20220725163945-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:39:59.557974   27571 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:39:59.597808   27571 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:39:59.607309   27571 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:39:59.614454   27571 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:39:59.614498   27571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:39:59.622998   27571 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:39:59.623015   27571 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
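The kubeadm init invocation above disables a fixed list of preflight checks, with SystemVerification appended because the docker driver skips it (see the line at 16:39:59.614454). A hedged sketch of assembling that flag (ignorePreflight is a hypothetical helper, and the check list here is abbreviated from the full command above):

// ignorePreflight joins the checks to skip into the single
// --ignore-preflight-errors=... argument seen in the command above.
package main

import (
	"fmt"
	"strings"
)

func ignorePreflight(extra ...string) string {
	checks := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"Port-10250", "Swap", "Mem",
	}
	checks = append(checks, extra...)
	return "--ignore-preflight-errors=" + strings.Join(checks, ",")
}

func main() {
	// SystemVerification is appended because the docker driver skips it.
	fmt.Println(ignorePreflight("SystemVerification"))
}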
	I0725 16:39:59.346019   27481 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64040/healthz ...
	I0725 16:39:59.353874   27481 api_server.go:266] https://127.0.0.1:64040/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 16:39:59.353891   27481 api_server.go:102] status: https://127.0.0.1:64040/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:39:59.845830   27481 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64040/healthz ...
	I0725 16:39:59.851193   27481 api_server.go:266] https://127.0.0.1:64040/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 16:39:59.851209   27481 api_server.go:102] status: https://127.0.0.1:64040/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:40:00.345901   27481 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64040/healthz ...
	I0725 16:40:00.353216   27481 api_server.go:266] https://127.0.0.1:64040/healthz returned 200:
	ok
	I0725 16:40:00.359510   27481 api_server.go:140] control plane version: v1.24.3
	I0725 16:40:00.359525   27481 api_server.go:130] duration metric: took 4.766262273s to wait for apiserver health ...
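The 500-to-200 transition above is the expected pattern while kubeadm finishes the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks; the client simply re-polls /healthz roughly every 500ms until it returns 200. A minimal Go polling sketch under those assumptions (waitHealthz is hypothetical, not minikube's api_server.go; the apiserver presents a cert signed by minikubeCA rather than one valid for 127.0.0.1, hence InsecureSkipVerify in this sketch):

// waitHealthz polls an apiserver /healthz endpoint until it returns 200
// or the timeout elapses, mirroring the retry loop in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for this sketch: skip verification instead of
		// loading minikube's CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://127.0.0.1:64040/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}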
	I0725 16:40:00.359531   27481 cni.go:95] Creating CNI manager for ""
	I0725 16:40:00.359536   27481 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:40:00.359541   27481 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 16:40:00.364300   27481 system_pods.go:59] 5 kube-system pods found
	I0725 16:40:00.364315   27481 system_pods.go:61] "etcd-kubernetes-upgrade-20220725163448-14919" [a898a2bc-718a-4acb-919e-4c2962311ba8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 16:40:00.364324   27481 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220725163448-14919" [15f29553-ca1a-4e8c-b546-f7e9bab78551] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 16:40:00.364332   27481 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220725163448-14919" [ad63420b-21d9-4d73-9219-ee7ef0c6ab43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 16:40:00.364338   27481 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220725163448-14919" [c282544a-9856-4f25-a146-a078016de046] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 16:40:00.364343   27481 system_pods.go:61] "storage-provisioner" [93486e0f-0a55-40a4-ad45-126afcb75692] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0725 16:40:00.364346   27481 system_pods.go:74] duration metric: took 4.802472ms to wait for pod list to return data ...
	I0725 16:40:00.364354   27481 node_conditions.go:102] verifying NodePressure condition ...
	I0725 16:40:00.367693   27481 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 16:40:00.367708   27481 node_conditions.go:123] node cpu capacity is 6
	I0725 16:40:00.367717   27481 node_conditions.go:105] duration metric: took 3.359041ms to run NodePressure ...
	I0725 16:40:00.367729   27481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:40:00.486300   27481 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 16:40:00.495565   27481 ops.go:34] apiserver oom_adj: -16
	I0725 16:40:00.495575   27481 kubeadm.go:630] restartCluster took 12.614428502s
	I0725 16:40:00.495582   27481 kubeadm.go:397] StartCluster complete in 12.691513922s
	I0725 16:40:00.495598   27481 settings.go:142] acquiring lock: {Name:mkcd702d4f365962a78fa014f59c2f8489658e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:40:00.495682   27481 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:40:00.496223   27481 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:40:00.496903   27481 kapi.go:59] client config for kubernetes-upgrade-20220725163448-14919: &rest.Config{Host:"https://127.0.0.1:64040", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 16:40:00.499653   27481 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20220725163448-14919" rescaled to 1
	I0725 16:40:00.499689   27481 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 16:40:00.499706   27481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 16:40:00.499720   27481 addons.go:412] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0725 16:40:00.521052   27481 out.go:177] * Verifying Kubernetes components...
	I0725 16:40:00.499863   27481 config.go:178] Loaded profile config "kubernetes-upgrade-20220725163448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:40:00.521103   27481 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-20220725163448-14919"
	I0725 16:40:00.521104   27481 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-20220725163448-14919"
	I0725 16:40:00.557949   27481 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0725 16:40:00.579680   27481 addons.go:153] Setting addon storage-provisioner=true in "kubernetes-upgrade-20220725163448-14919"
	W0725 16:40:00.579711   27481 addons.go:162] addon storage-provisioner should already be in state true
	I0725 16:40:00.579681   27481 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20220725163448-14919"
	I0725 16:40:00.579690   27481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:40:00.579766   27481 host.go:66] Checking if "kubernetes-upgrade-20220725163448-14919" exists ...
	I0725 16:40:00.580077   27481 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725163448-14919 --format={{.State.Status}}
	I0725 16:40:00.580157   27481 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725163448-14919 --format={{.State.Status}}
	I0725 16:40:00.592428   27481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:40:00.693633   27481 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 16:40:00.714316   27481 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 16:40:00.714327   27481 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 16:40:00.714396   27481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:40:00.717457   27481 kapi.go:59] client config for kubernetes-upgrade-20220725163448-14919: &rest.Config{Host:"https://127.0.0.1:64040", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 16:40:00.720045   27481 api_server.go:51] waiting for apiserver process to appear ...
	I0725 16:40:00.720105   27481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:40:00.725076   27481 addons.go:153] Setting addon default-storageclass=true in "kubernetes-upgrade-20220725163448-14919"
	W0725 16:40:00.725090   27481 addons.go:162] addon default-storageclass should already be in state true
	I0725 16:40:00.725108   27481 host.go:66] Checking if "kubernetes-upgrade-20220725163448-14919" exists ...
	I0725 16:40:00.725478   27481 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725163448-14919 --format={{.State.Status}}
	I0725 16:40:00.733308   27481 api_server.go:71] duration metric: took 233.570863ms to wait for apiserver process to appear ...
	I0725 16:40:00.733336   27481 api_server.go:87] waiting for apiserver healthz status ...
	I0725 16:40:00.733351   27481 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64040/healthz ...
	I0725 16:40:00.740082   27481 api_server.go:266] https://127.0.0.1:64040/healthz returned 200:
	ok
	I0725 16:40:00.741760   27481 api_server.go:140] control plane version: v1.24.3
	I0725 16:40:00.741772   27481 api_server.go:130] duration metric: took 8.431765ms to wait for apiserver health ...
	I0725 16:40:00.741779   27481 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 16:40:00.746625   27481 system_pods.go:59] 5 kube-system pods found
	I0725 16:40:00.746650   27481 system_pods.go:61] "etcd-kubernetes-upgrade-20220725163448-14919" [a898a2bc-718a-4acb-919e-4c2962311ba8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 16:40:00.746666   27481 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220725163448-14919" [15f29553-ca1a-4e8c-b546-f7e9bab78551] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 16:40:00.746674   27481 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220725163448-14919" [ad63420b-21d9-4d73-9219-ee7ef0c6ab43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 16:40:00.746680   27481 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220725163448-14919" [c282544a-9856-4f25-a146-a078016de046] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 16:40:00.746691   27481 system_pods.go:61] "storage-provisioner" [93486e0f-0a55-40a4-ad45-126afcb75692] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0725 16:40:00.746697   27481 system_pods.go:74] duration metric: took 4.913006ms to wait for pod list to return data ...
	I0725 16:40:00.746704   27481 kubeadm.go:572] duration metric: took 246.994114ms to wait for : map[apiserver:true system_pods:true] ...
	I0725 16:40:00.746714   27481 node_conditions.go:102] verifying NodePressure condition ...
	I0725 16:40:00.749784   27481 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 16:40:00.749806   27481 node_conditions.go:123] node cpu capacity is 6
	I0725 16:40:00.749826   27481 node_conditions.go:105] duration metric: took 3.107235ms to run NodePressure ...
	I0725 16:40:00.749835   27481 start.go:216] waiting for startup goroutines ...
	I0725 16:40:00.802465   27481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64036 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:40:00.809713   27481 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 16:40:00.809728   27481 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 16:40:00.809795   27481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:40:00.893527   27481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64036 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:40:00.904795   27481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 16:40:01.020072   27481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 16:40:01.615264   27481 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0725 16:39:59.938202   27571 out.go:204]   - Generating certificates and keys ...
	I0725 16:40:01.635909   27481 addons.go:414] enableAddons completed in 1.136188982s
	I0725 16:40:01.690136   27481 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0725 16:40:01.711768   27481 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20220725163448-14919" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 23:39:05 UTC, end at Mon 2022-07-25 23:40:03 UTC. --
	Jul 25 23:39:44 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:44.821229462Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 23:39:44 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:44.821235510Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 23:39:44 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:44.826270235Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 25 23:39:44 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:44.833277546Z" level=info msg="Loading containers: start."
	Jul 25 23:39:44 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:44.913280841Z" level=info msg="ignoring event" container=6c82a96c49ce3644cdf51ae10bc8e92bdd1e7148fe9a27807f12f27c9e09c7d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:39:44 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:44.996791574Z" level=info msg="ignoring event" container=6af6492d6f4bbfe2f28c1bb11529fb33b463cc3ea90e5f4dbdf87cf1e22b7003 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:39:45 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:45.165565186Z" level=info msg="Removing stale sandbox beaa02d052aa845f715eb49a85b25a37cbc5835a683c86646337a45f95b0eb85 (44f819e3f58eb628db71862d6a332086eb73a2d5aac3c11019e56be81e7dac41)"
	Jul 25 23:39:45 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:45.166975046Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 47db9de37a5b67c000cdefc55925e6c97bc31451efd28c8c4410089f5fef4c56 b3c80635de8a8bcb57991128e8274d5ab1b524b16531e54b61149240831ac6cd], retrying...."
	Jul 25 23:39:45 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:45.261982819Z" level=info msg="Removing stale sandbox 72af184122de81279e0e6308bfd6709ea985b60c00a0c56b0b01e6965642f84f (6c82a96c49ce3644cdf51ae10bc8e92bdd1e7148fe9a27807f12f27c9e09c7d4)"
	Jul 25 23:39:45 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:45.263432131Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 47db9de37a5b67c000cdefc55925e6c97bc31451efd28c8c4410089f5fef4c56 d3dafaa42f23e037e67f7876f81be8f8ee195d928f70f2a830bf1c02a57b79b7], retrying...."
	Jul 25 23:39:45 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:45.287047859Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 23:39:45 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:45.328148769Z" level=info msg="Loading containers: done."
	Jul 25 23:39:45 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:45.341513652Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 25 23:39:45 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:45.341584316Z" level=info msg="Daemon has completed initialization"
	Jul 25 23:39:45 kubernetes-upgrade-20220725163448-14919 systemd[1]: Started Docker Application Container Engine.
	Jul 25 23:39:45 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:45.366893288Z" level=info msg="API listen on [::]:2376"
	Jul 25 23:39:45 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:45.374767868Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 23:39:52 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:52.196700465Z" level=info msg="ignoring event" container=2d02e5cd421412311642159be9f3db2de208487846e8ba0f683bd7fecf2351bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:39:52 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:52.198313089Z" level=info msg="ignoring event" container=4316c24e8125ab1f0c0ead93d5f722f8ff467c18e7f97a5b3cd838c025e07ebc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:39:52 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:52.217122394Z" level=info msg="ignoring event" container=101d55ddf1f5985f23b1821008372bd75a8bcbba5be639b941169c7e554618d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:39:52 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:52.265318682Z" level=info msg="ignoring event" container=6d2e0a320963a40ec9274d6a856b1803269f3e54b516f489afb446343039aa3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:39:52 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:52.267354059Z" level=info msg="ignoring event" container=a0d6c137e89f6fd4641d5fbe714d819a78853518d6744c7b09cdc261e59a05d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:39:52 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:52.269897550Z" level=info msg="ignoring event" container=b3ab25c163b37edd5faace24ae130ce65482a0267500ba0b49b3d3ee9f053ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:39:53 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:53.206210477Z" level=info msg="ignoring event" container=4dee689ae2b14126ecdaff612f506ab04a5eaa8df9594902562bf1a394b09186 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:39:53 kubernetes-upgrade-20220725163448-14919 dockerd[2508]: time="2022-07-25T23:39:53.313597071Z" level=info msg="ignoring event" container=6fdc3048f70af11e2efd3ac87ccf40bf912e154e124753f5022e05b0dba7cb35 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9d58a46bb45df       aebe758cef4cd       8 seconds ago       Running             etcd                      2                   54313145a4da9
	2c544cb85a653       3a5aa3a515f5d       8 seconds ago       Running             kube-scheduler            3                   30aafa31b9339
	b9666a3e3ff43       586c112956dfc       8 seconds ago       Running             kube-controller-manager   2                   73fc4cf0cdab1
	2558da31b758d       d521dd763e2e3       8 seconds ago       Running             kube-apiserver            2                   02e851f03ba11
	101d55ddf1f59       586c112956dfc       17 seconds ago      Exited              kube-controller-manager   1                   4316c24e8125a
	b3ab25c163b37       aebe758cef4cd       17 seconds ago      Exited              etcd                      1                   a0d6c137e89f6
	6fdc3048f70af       d521dd763e2e3       17 seconds ago      Exited              kube-apiserver            1                   6d2e0a320963a
	4dee689ae2b14       3a5aa3a515f5d       17 seconds ago      Exited              kube-scheduler            2                   2d02e5cd42141
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-20220725163448-14919
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-20220725163448-14919
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 23:39:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-20220725163448-14919
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 23:39:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 23:39:58 +0000   Mon, 25 Jul 2022 23:39:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 23:39:58 +0000   Mon, 25 Jul 2022 23:39:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 23:39:58 +0000   Mon, 25 Jul 2022 23:39:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 23:39:58 +0000   Mon, 25 Jul 2022 23:39:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-20220725163448-14919
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                aaf9326f-4156-45ff-ad17-0ea18bffbf47
	  Boot ID:                    95c3cee9-5325-46b1-8645-b2afb4791ab2
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-20220725163448-14919                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kube-apiserver-kubernetes-upgrade-20220725163448-14919             250m (4%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-20220725163448-14919    200m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-kubernetes-upgrade-20220725163448-14919             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 42s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s (x8 over 42s)  kubelet  Node kubernetes-upgrade-20220725163448-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet  Node kubernetes-upgrade-20220725163448-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x7 over 42s)  kubelet  Node kubernetes-upgrade-20220725163448-14919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-20220725163448-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-20220725163448-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet  Node kubernetes-upgrade-20220725163448-14919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.001693] FS-Cache: O-key=[8] '4b21e30300000000'
	[  +0.001143] FS-Cache: N-cookie c=00000000a437b4cf [p=0000000011de058a fl=2 nc=0 na=1]
	[  +0.001815] FS-Cache: N-cookie d=00000000f8b35d0a n=00000000f753d2b0
	[  +0.001563] FS-Cache: N-key=[8] '4b21e30300000000'
	[  +0.002293] FS-Cache: Duplicate cookie detected
	[  +0.001068] FS-Cache: O-cookie c=000000005465244a [p=0000000011de058a fl=226 nc=0 na=1]
	[  +0.002019] FS-Cache: O-cookie d=00000000f8b35d0a n=0000000034ab7a4f
	[  +0.001717] FS-Cache: O-key=[8] '4b21e30300000000'
	[  +0.001378] FS-Cache: N-cookie c=00000000a437b4cf [p=0000000011de058a fl=2 nc=0 na=1]
	[  +0.002284] FS-Cache: N-cookie d=00000000f8b35d0a n=000000007b4e0c1a
	[  +0.001579] FS-Cache: N-key=[8] '4b21e30300000000'
	[  +4.146243] FS-Cache: Duplicate cookie detected
	[  +0.001146] FS-Cache: O-cookie c=000000007cbfde7b [p=0000000011de058a fl=226 nc=0 na=1]
	[  +0.001777] FS-Cache: O-cookie d=00000000f8b35d0a n=0000000086804b65
	[  +0.001765] FS-Cache: O-key=[8] '4a21e30300000000'
	[  +0.001116] FS-Cache: N-cookie c=00000000a437b4cf [p=0000000011de058a fl=2 nc=0 na=1]
	[  +0.001777] FS-Cache: N-cookie d=00000000f8b35d0a n=000000007b4e0c1a
	[  +0.001461] FS-Cache: N-key=[8] '4a21e30300000000'
	[  +0.500906] FS-Cache: Duplicate cookie detected
	[  +0.001416] FS-Cache: O-cookie c=00000000d2bf1a30 [p=0000000011de058a fl=226 nc=0 na=1]
	[  +0.001824] FS-Cache: O-cookie d=00000000f8b35d0a n=000000006a798d13
	[  +0.001465] FS-Cache: O-key=[8] '5221e30300000000'
	[  +0.001130] FS-Cache: N-cookie c=00000000a437b4cf [p=0000000011de058a fl=2 nc=0 na=1]
	[  +0.001784] FS-Cache: N-cookie d=00000000f8b35d0a n=00000000e6dd0465
	[  +0.001456] FS-Cache: N-key=[8] '5221e30300000000'
	
	* 
	* ==> etcd [9d58a46bb45d] <==
	* {"level":"info","ts":"2022-07-25T23:39:55.396Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-25T23:39:55.396Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-25T23:39:55.397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-25T23:39:55.398Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T23:39:55.398Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T23:39:55.398Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T23:39:55.397Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-25T23:39:55.398Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:39:55.398Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:39:55.398Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T23:39:55.398Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T23:39:56.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2022-07-25T23:39:56.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-07-25T23:39:56.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-25T23:39:56.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2022-07-25T23:39:56.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-07-25T23:39:56.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2022-07-25T23:39:56.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-07-25T23:39:56.990Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-20220725163448-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T23:39:56.990Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:39:56.990Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:39:56.991Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T23:39:56.991Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T23:39:56.992Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T23:39:56.992Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [b3ab25c163b3] <==
	* {"level":"info","ts":"2022-07-25T23:39:48.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2022-07-25T23:39:48.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-25T23:39:48.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2022-07-25T23:39:48.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-25T23:39:48.596Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-20220725163448-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T23:39:48.596Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:39:48.596Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:39:48.597Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T23:39:48.597Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T23:39:48.597Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T23:39:48.597Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T23:39:51.175Z","caller":"traceutil/trace.go:171","msg":"trace[952094038] linearizableReadLoop","detail":"{readStateIndex:320; appliedIndex:319; }","duration":"193.329476ms","start":"2022-07-25T23:39:50.981Z","end":"2022-07-25T23:39:51.175Z","steps":["trace[952094038] 'read index received'  (duration: 192.241438ms)","trace[952094038] 'applied index is now lower than readState.Index'  (duration: 1.087313ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-25T23:39:51.175Z","caller":"traceutil/trace.go:171","msg":"trace[130430021] transaction","detail":"{read_only:false; response_revision:310; number_of_response:1; }","duration":"281.96366ms","start":"2022-07-25T23:39:50.893Z","end":"2022-07-25T23:39:51.175Z","steps":["trace[130430021] 'process raft request'  (duration: 280.891342ms)"],"step_count":1}
	{"level":"warn","ts":"2022-07-25T23:39:51.175Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"193.487802ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-07-25T23:39:51.175Z","caller":"traceutil/trace.go:171","msg":"trace[155665692] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:310; }","duration":"193.660973ms","start":"2022-07-25T23:39:50.981Z","end":"2022-07-25T23:39:51.175Z","steps":["trace[155665692] 'agreement among raft nodes before linearized reading'  (duration: 193.422566ms)"],"step_count":1}
	{"level":"warn","ts":"2022-07-25T23:39:51.175Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"184.114675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-07-25T23:39:51.175Z","caller":"traceutil/trace.go:171","msg":"trace[1320276176] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:310; }","duration":"184.403796ms","start":"2022-07-25T23:39:50.991Z","end":"2022-07-25T23:39:51.175Z","steps":["trace[1320276176] 'agreement among raft nodes before linearized reading'  (duration: 184.080975ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-25T23:39:52.130Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-25T23:39:52.130Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"kubernetes-upgrade-20220725163448-14919","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/07/25 23:39:52 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/25 23:39:52 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-25T23:39:52.141Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-07-25T23:39:52.142Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T23:39:52.144Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T23:39:52.144Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"kubernetes-upgrade-20220725163448-14919","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  23:40:04 up 46 min,  0 users,  load average: 1.83, 1.37, 1.00
	Linux kubernetes-upgrade-20220725163448-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2558da31b758] <==
	* I0725 23:39:58.863070       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0725 23:39:58.862841       1 autoregister_controller.go:141] Starting autoregister controller
	I0725 23:39:58.863116       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0725 23:39:58.862885       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0725 23:39:58.862913       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 23:39:58.862921       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 23:39:58.866268       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0725 23:39:58.866276       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E0725 23:39:58.896362       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0725 23:39:58.963334       1 cache.go:39] Caches are synced for autoregister controller
	I0725 23:39:58.963515       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 23:39:58.963671       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 23:39:58.964011       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0725 23:39:58.964024       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0725 23:39:58.966344       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0725 23:39:58.966456       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0725 23:39:58.973058       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 23:39:59.014116       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 23:39:59.594644       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0725 23:39:59.867852       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 23:40:00.449879       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 23:40:00.455631       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 23:40:00.478702       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 23:40:00.490428       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 23:40:00.494990       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [6fdc3048f70a] <==
	* W0725 23:39:53.135870       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.135881       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.135901       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.135902       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.135941       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.135997       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136044       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136086       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136104       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136127       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136151       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136137       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136169       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136174       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136188       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136190       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136220       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136196       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136203       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136215       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136240       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136227       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136243       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136263       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 23:39:53.136345       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [101d55ddf1f5] <==
	* I0725 23:39:48.005558       1 serving.go:348] Generated self-signed cert in-memory
	I0725 23:39:48.475060       1 controllermanager.go:180] Version: v1.24.3
	I0725 23:39:48.475096       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 23:39:48.475830       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 23:39:48.476010       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0725 23:39:48.476112       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 23:39:48.476288       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-controller-manager [b9666a3e3ff4] <==
	* I0725 23:39:56.325512       1 serving.go:348] Generated self-signed cert in-memory
	I0725 23:39:56.615273       1 controllermanager.go:180] Version: v1.24.3
	I0725 23:39:56.615314       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 23:39:56.616261       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 23:39:56.616284       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0725 23:39:56.616452       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 23:39:56.616512       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 23:40:01.005227       1 shared_informer.go:255] Waiting for caches to sync for tokens
	I0725 23:40:01.072731       1 controllermanager.go:593] Started "namespace"
	I0725 23:40:01.072901       1 namespace_controller.go:200] Starting namespace controller
	I0725 23:40:01.072928       1 shared_informer.go:255] Waiting for caches to sync for namespace
	I0725 23:40:01.076307       1 node_ipam_controller.go:91] Sending events to api server.
	I0725 23:40:01.105825       1 shared_informer.go:262] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [2c544cb85a65] <==
	* I0725 23:39:56.291690       1 serving.go:348] Generated self-signed cert in-memory
	I0725 23:39:58.907986       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0725 23:39:58.908183       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 23:39:58.911394       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0725 23:39:58.911570       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0725 23:39:58.911678       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0725 23:39:58.911725       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 23:39:58.913453       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0725 23:39:58.917979       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 23:39:58.914327       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 23:39:58.918404       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 23:39:59.012542       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0725 23:39:59.019051       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 23:39:59.019056       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [4dee689ae2b1] <==
	* I0725 23:39:48.262908       1 serving.go:348] Generated self-signed cert in-memory
	W0725 23:39:50.745159       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 23:39:50.763524       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 23:39:50.763580       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 23:39:50.763591       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 23:39:50.783612       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0725 23:39:50.783737       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 23:39:50.786023       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0725 23:39:50.786124       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 23:39:50.786137       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 23:39:50.786155       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 23:39:50.887715       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 23:39:52.174048       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0725 23:39:52.174427       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 23:39:52.175678       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 23:39:05 UTC, end at Mon 2022-07-25 23:40:05 UTC. --
	Jul 25 23:39:56 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:56.805229    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:56 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:56.905752    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:57 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:57.006310    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:57 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:57.106636    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:57 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:57.207329    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:57 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:57.308305    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:57 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:57.409115    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:57 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:57.510050    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:57 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:57.610630    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:57 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:57.711421    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:57 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:57.812430    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:57 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:57.912507    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:58 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:58.013435    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:58 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:58.113899    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:58 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:58.214616    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:58 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:58.314835    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:58 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:58.416017    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:58 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:58.516162    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:58 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:58.616644    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:58 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:58.717800    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:58 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: E0725 23:39:58.818948    3860 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725163448-14919\" not found"
	Jul 25 23:39:58 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: I0725 23:39:58.971817    3860 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-20220725163448-14919"
	Jul 25 23:39:58 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: I0725 23:39:58.972011    3860 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-20220725163448-14919"
	Jul 25 23:39:59 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: I0725 23:39:59.566413    3860 apiserver.go:52] "Watching apiserver"
	Jul 25 23:39:59 kubernetes-upgrade-20220725163448-14919 kubelet[3860]: I0725 23:39:59.729982    3860 reconciler.go:157] "Reconciler: start to sync state"
	

-- /stdout --
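Two details stand out in the logs above: the second apiserver instance (6fdc3048f70a) spends its last recorded seconds failing every gRPC dial to etcd at 127.0.0.1:2379, and the kubelet's "Error getting node" spam stops at 23:39:58 once the node registers. On a live cluster, etcd reachability can be sanity-checked with stock tooling along these lines (illustrative commands, not part of the harness; `/readyz/etcd` is a standard apiserver readiness check on v1.24):
	kubectl --context kubernetes-upgrade-20220725163448-14919 get --raw='/readyz/etcd'
	minikube ssh -p kubernetes-upgrade-20220725163448-14919 -- sudo ss -ltn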
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220725163448-14919 -n kubernetes-upgrade-20220725163448-14919
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-20220725163448-14919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context kubernetes-upgrade-20220725163448-14919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.776509677s)
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-20220725163448-14919 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220725163448-14919 describe pod storage-provisioner: exit status 1 (49.154357ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-20220725163448-14919 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220725163448-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220725163448-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220725163448-14919: (3.102469682s)
--- FAIL: TestKubernetesUpgrade (322.34s)
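One reading of the post-mortem above: the field-selector query found a single non-running pod, storage-provisioner, but the follow-up describe returned NotFound, which suggests the pod was deleted or recreated in the window between the two kubectl calls; the exit status 1 is therefore post-mortem noise rather than an additional failure. The same non-running-pod query works against any context (illustrative):
	kubectl get pods -A --field-selector=status.phase!=Running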

TestMissingContainerUpgrade (48.24s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3494734602.exe start -p missing-upgrade-20220725163400-14919 --memory=2200 --driver=docker 
E0725 16:34:13.692272   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3494734602.exe start -p missing-upgrade-20220725163400-14919 --memory=2200 --driver=docker : exit status 78 (33.785235882s)

-- stdout --
	* [missing-upgrade-20220725163400-14919] minikube v1.9.1 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220725163400-14919
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-20220725163400-14919" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:34:15.562401246 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-20220725163400-14919" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:34:32.625401825 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

** /stderr **
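The generated unit in the diff above leans on systemd's ExecStart reset rule: for anything other than Type=oneshot, a bare `ExecStart=` clears every command declared so far, so the replacement command that follows becomes the only one and the "more than one ExecStart= setting" error quoted in the comments is avoided. As a minimal sketch, the same pattern written as a conventional drop-in would be (path and dockerd flags here are illustrative; the v1.9.x provisioner instead overwrites /lib/systemd/system/docker.service in place, as the `mv` in the failing command shows):
	# /etc/systemd/system/docker.service.d/10-override.conf (hypothetical drop-in)
	[Service]
	# Reset the inherited ExecStart, then declare the one we want.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
followed by `sudo systemctl daemon-reload && sudo systemctl restart docker`.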
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3494734602.exe start -p missing-upgrade-20220725163400-14919 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3494734602.exe start -p missing-upgrade-20220725163400-14919 --memory=2200 --driver=docker : exit status 70 (4.28055s)

-- stdout --
	* [missing-upgrade-20220725163400-14919] minikube v1.9.1 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220725163400-14919
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220725163400-14919" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3494734602.exe start -p missing-upgrade-20220725163400-14919 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3494734602.exe start -p missing-upgrade-20220725163400-14919 --memory=2200 --driver=docker : exit status 70 (4.166958962s)

-- stdout --
	* [missing-upgrade-20220725163400-14919] minikube v1.9.1 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220725163400-14919
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220725163400-14919" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
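All three attempts die at the same point: dockerd inside the kic container will not start after the unit rewrite, so the first run exits 78 with DOCKER_RESTART_FAILED and the retries exit 70 at "Failed to enable container runtime". Assuming the container is still up (the inspect below says it is), the log's own suggestion translates into checks along these lines (illustrative):
	docker exec missing-upgrade-20220725163400-14919 systemctl status docker.service
	docker exec missing-upgrade-20220725163400-14919 journalctl -xe -u docker.service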
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-07-25 16:34:45.452514 -0700 PDT m=+2540.420801523
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220725163400-14919
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20220725163400-14919:

-- stdout --
	[
	    {
	        "Id": "cda033091198aa68ad48830a8bc0623826cec8d7e1119496f46b6c5a77c9f72e",
	        "Created": "2022-07-25T23:34:23.775169222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 144037,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:34:24.012211643Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/cda033091198aa68ad48830a8bc0623826cec8d7e1119496f46b6c5a77c9f72e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cda033091198aa68ad48830a8bc0623826cec8d7e1119496f46b6c5a77c9f72e/hostname",
	        "HostsPath": "/var/lib/docker/containers/cda033091198aa68ad48830a8bc0623826cec8d7e1119496f46b6c5a77c9f72e/hosts",
	        "LogPath": "/var/lib/docker/containers/cda033091198aa68ad48830a8bc0623826cec8d7e1119496f46b6c5a77c9f72e/cda033091198aa68ad48830a8bc0623826cec8d7e1119496f46b6c5a77c9f72e-json.log",
	        "Name": "/missing-upgrade-20220725163400-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20220725163400-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e9fd133a8dac41fee9ee65a255585448e32dba3c75a423da26b560d7bd031d86-init/diff:/var/lib/docker/overlay2/974d823892b89bae092eccadf060a8c6aed1f7fb0a6093743c321409323058b9/diff:/var/lib/docker/overlay2/ced73de6c3f1fdf98deb4630ebf2474d8f74baa5cd26fdb3d9decef060ae6f74/diff:/var/lib/docker/overlay2/c8f60c36f08254a27408c8f766a1326e9886fbd11aaa7587071af2858637f918/diff:/var/lib/docker/overlay2/3018fdda1859c2de0fd5f338b142de6d798ea38ea06617ed746551538735d335/diff:/var/lib/docker/overlay2/9946a21a7825b5cc6c2e9de80a91755fb86e38729b7a62630141715bf109ade3/diff:/var/lib/docker/overlay2/aadbee40fb42ec5693023d561580ab07ee91c1ff8fad55cd0b79c16ce3adf4f7/diff:/var/lib/docker/overlay2/9f90f677f177db8b6a6587f4e54932b32d53c84882f0548ebc1aabe213cf7d25/diff:/var/lib/docker/overlay2/5986a5e59db7cab26b1709feb2e5f832a621bb1907628146cdb24b4c29fbc5c4/diff:/var/lib/docker/overlay2/430cc152ab6e35ab72dd5ec1e43b1880a9e5a6804d878696333ca9ef2ae18114/diff:/var/lib/docker/overlay2/7bf3e907040cf03ff17daa64cad8b0825603e78921b6f5f9e981b8cdf71a65c4/diff:/var/lib/docker/overlay2/c66506223dac7f0cd80d3730bcdd87c1acf681ac8c34154d5b998177a17d2905/diff:/var/lib/docker/overlay2/a8ce9f864f358efb38080d249efdc38e27f7e5f080364f951a2cba55eba02bc4/diff:/var/lib/docker/overlay2/c86adef54e98a8919440d996890121f850adbc8815e87833ee6aae81a8620ca6/diff:/var/lib/docker/overlay2/8f67672e6507f0dd5cb0f415542f261d340a8a6784d327bc92210628f964503a/diff:/var/lib/docker/overlay2/6ce94ba6472679bd3bcd9c8564cd354ec35b5ccc2c7dbdd2a3d9336cdf43e6a4/diff:/var/lib/docker/overlay2/87b56923b36d8d20bb4154d81f9f8e7cb3d8aeaef5a496351341cc2320d706f3/diff:/var/lib/docker/overlay2/aacb33a6c5a16310153c98cb29a9c43978a237ddb7f33a91e3077c999185a519/diff:/var/lib/docker/overlay2/9200066cea73e4a5113439bfa175043a8b14d43b8ef508830693d9c56acabf08/diff:/var/lib/docker/overlay2/94d96ed7ad2ad6af98e5bd2e03d9f8c7f588ee9c13972ffb85190455f2a9c179/diff:/var/lib/docker/overlay2/050dff19d196127eaa7380bbf6e957d58b901e0e8713b88c51eed27d905cb323/diff:/var/lib/docker/overlay2/d9c7b17075d136dd7e1bb1f6f2f1a6da63d216b07f790834109cc7fcedd1658d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e9fd133a8dac41fee9ee65a255585448e32dba3c75a423da26b560d7bd031d86/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e9fd133a8dac41fee9ee65a255585448e32dba3c75a423da26b560d7bd031d86/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e9fd133a8dac41fee9ee65a255585448e32dba3c75a423da26b560d7bd031d86/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20220725163400-14919",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20220725163400-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20220725163400-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20220725163400-14919",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20220725163400-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "febd9de708440411fea8f7b18bd8f505626be001053b9752d46b05db6c8d661f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63102"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63103"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63104"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/febd9de70844",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "1a6cc1b16372f7831c43a8d29d950465f6c00979be654cf3fb7ad43d1592977d",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "9c1877d79117e9d87cbef2330fcadbcdf13c1df54be8168e15d84918d970d7bf",
	                    "EndpointID": "1a6cc1b16372f7831c43a8d29d950465f6c00979be654cf3fb7ad43d1592977d",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
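The inspect confirms the kic container itself is healthy: State.Status is "running", the image is kicbase v0.0.8, and the usual 22/2376/8443 ports are published on 127.0.0.1; the breakage is the docker daemon inside the container, not the container. The same fields can be spot-checked without the full dump (illustrative):
	docker inspect -f '{{.State.Status}} {{.Config.Image}}' missing-upgrade-20220725163400-14919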
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220725163400-14919 -n missing-upgrade-20220725163400-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220725163400-14919 -n missing-upgrade-20220725163400-14919: exit status 6 (432.368913ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 16:34:45.946282   26331 status.go:413] kubeconfig endpoint: extract IP: "missing-upgrade-20220725163400-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig

** /stderr **
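The status.go error is the stale-kubeconfig symptom the warning above describes: the profile never got an endpoint entry in the shared kubeconfig, so `status` cannot extract an IP and the helper treats the host as not running. Outside the harness, the warning's own remedy would apply (illustrative):
	minikube update-context -p missing-upgrade-20220725163400-14919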
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-20220725163400-14919" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-20220725163400-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220725163400-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220725163400-14919: (2.457197179s)
--- FAIL: TestMissingContainerUpgrade (48.24s)

TestStoppedBinaryUpgrade/Upgrade (45.61s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1313234498.exe start -p stopped-upgrade-20220725163620-14919 --memory=2200 --vm-driver=docker 
E0725 16:36:40.853604   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1313234498.exe start -p stopped-upgrade-20220725163620-14919 --memory=2200 --vm-driver=docker : exit status 70 (33.379601729s)

-- stdout --
	* [stopped-upgrade-20220725163620-14919] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig783901386
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:36:36.096057946 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-20220725163620-14919" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:36:52.714974493 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-20220725163620-14919", then "minikube start -p stopped-upgrade-20220725163620-14919 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:36:52.714974493 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
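
The diff embedded in the failure above illustrates the systemd rule its own comments describe: a unit (or drop-in) that redefines ExecStart= for a non-oneshot service must first clear the inherited value with an empty ExecStart=, or systemd refuses to start the service. Below is a minimal sketch, in Go, of writing such an override as a drop-in file; the directory, file name, and dockerd flags are illustrative assumptions, not minikube's actual provisioning code (which, as the failing command shows, rewrites /lib/systemd/system/docker.service wholesale instead).

package main

import (
	"log"
	"os"
	"path/filepath"
)

// override clears the ExecStart inherited from the base docker.service and
// then sets a replacement; without the empty assignment, systemd rejects the
// unit with "Service has more than one ExecStart= setting ...".
const override = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
`

func main() {
	// Standard systemd drop-in directory; used here for illustration only.
	dir := "/etc/systemd/system/docker.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "10-execstart.conf"), []byte(override), 0o644); err != nil {
		log.Fatal(err)
	}
	// A real provisioner would follow this with `systemctl daemon-reload` and
	// `systemctl restart docker` -- the step that exits with status 1 above.
}

Note also that the regenerated ExecReload=/bin/kill -s HUP line in the diff has lost the $MAINPID argument present in the original unit; the log records only that the subsequent restart failed, not which directive systemd ultimately rejected.
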
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1313234498.exe start -p stopped-upgrade-20220725163620-14919 --memory=2200 --vm-driver=docker 
E0725 16:36:55.924084   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1313234498.exe start -p stopped-upgrade-20220725163620-14919 --memory=2200 --vm-driver=docker : exit status 70 (4.753225773s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220725163620-14919] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig912859312
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220725163620-14919" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1313234498.exe start -p stopped-upgrade-20220725163620-14919 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1313234498.exe start -p stopped-upgrade-20220725163620-14919 --memory=2200 --vm-driver=docker : exit status 70 (4.723386698s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220725163620-14919] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1177404957
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220725163620-14919" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (45.61s)
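
For context on the failure above: the upgrade test drives a previously released minikube binary, downloaded to the temp path shown, and retries the legacy start before declaring failure. A rough sketch of that retry loop follows; the binary path and arguments are copied from the log, while the retry budget of three is an assumption (the log shows repeated attempts, but the actual count in version_upgrade_test.go is not visible here).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Path and arguments taken verbatim from the log lines above.
	bin := "/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1313234498.exe"
	args := []string{"start", "-p", "stopped-upgrade-20220725163620-14919",
		"--memory=2200", "--vm-driver=docker"}
	var err error
	for attempt := 1; attempt <= 3; attempt++ { // retry budget assumed
		if err = exec.Command(bin, args...).Run(); err == nil {
			fmt.Println("legacy start succeeded on attempt", attempt)
			return
		}
		// In the run above each attempt ended with "exit status 70".
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
	}
	fmt.Println("legacy v1.9.0 start failed:", err)
}
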

                                                
                                    
x
+
TestPause/serial/VerifyStatus (62.21s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220725163713-14919 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220725163713-14919 --output=json --layout=cluster: exit status 2 (16.107154549s)

                                                
                                                
-- stdout --
	{"Name":"pause-20220725163713-14919","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220725163713-14919","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
pause_test.go:200: incorrect status code: 405
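
The assertion at pause_test.go:200 checks the StatusCode field of the cluster-layout JSON shown above. The following is a minimal sketch of decoding that payload; the struct fields are taken from the output itself, trimmed to what the check needs, and are not minikube's real types. The raw literal is abridged from the captured stdout.

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors a subset of the JSON printed by
// `minikube status --output=json --layout=cluster` above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
	} `json:"Nodes"`
}

func main() {
	raw := []byte(`{"Name":"pause-20220725163713-14919","StatusCode":405,"StatusName":"Stopped","Nodes":[{"Name":"pause-20220725163713-14919","StatusCode":200}]}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	// The test failed because 405 ("Stopped") was reported for a cluster it
	// had just paused; the code it expected instead is not shown in this log.
	fmt.Printf("cluster %s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
}
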
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220725163713-14919
helpers_test.go:235: (dbg) docker inspect pause-20220725163713-14919:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "614cce9e35c81eccf50ea7955e1547cd027ab58c248bd0bb3b4ee6ef13cb3f74",
	        "Created": "2022-07-25T23:37:20.622787372Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 155152,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:37:20.922857011Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/614cce9e35c81eccf50ea7955e1547cd027ab58c248bd0bb3b4ee6ef13cb3f74/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/614cce9e35c81eccf50ea7955e1547cd027ab58c248bd0bb3b4ee6ef13cb3f74/hostname",
	        "HostsPath": "/var/lib/docker/containers/614cce9e35c81eccf50ea7955e1547cd027ab58c248bd0bb3b4ee6ef13cb3f74/hosts",
	        "LogPath": "/var/lib/docker/containers/614cce9e35c81eccf50ea7955e1547cd027ab58c248bd0bb3b4ee6ef13cb3f74/614cce9e35c81eccf50ea7955e1547cd027ab58c248bd0bb3b4ee6ef13cb3f74-json.log",
	        "Name": "/pause-20220725163713-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220725163713-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220725163713-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2f147a7191148740691e9faa205f6469be362bf1b0ed90e947c38b1e242a2014-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c70
0ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/d
ocker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a70
0e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f147a7191148740691e9faa205f6469be362bf1b0ed90e947c38b1e242a2014/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f147a7191148740691e9faa205f6469be362bf1b0ed90e947c38b1e242a2014/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f147a7191148740691e9faa205f6469be362bf1b0ed90e947c38b1e242a2014/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20220725163713-14919",
	                "Source": "/var/lib/docker/volumes/pause-20220725163713-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220725163713-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220725163713-14919",
	                "name.minikube.sigs.k8s.io": "pause-20220725163713-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8e3146fd1ffb1ab6323232d078562008d3d139df969f1850da2035a467d5de6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63804"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63805"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63806"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63807"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63808"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f8e3146fd1ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220725163713-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "614cce9e35c8",
	                        "pause-20220725163713-14919"
	                    ],
	                    "NetworkID": "43bbbb3c6fbba08b9432fc180e4e367e65c9619f3a53ac4a1504fa2843f909ee",
	                    "EndpointID": "fcc00083ab7e899d72fbdc873676e2d285bb042b099aa5bd7d80d85b8d1f4cd1",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220725163713-14919 -n pause-20220725163713-14919

                                                
                                                
=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220725163713-14919 -n pause-20220725163713-14919: exit status 2 (16.128657531s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
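
The helper tolerates the non-zero exit here because `--format={{.Host}}` renders a Go text/template over minikube's status struct, and the exit code encodes cluster state rather than signaling a command failure, which is presumably why the helper logs "(may be ok)". A small illustration of the template side, using a stand-in struct rather than minikube's real one:

package main

import (
	"os"
	"text/template"
)

// status is a stand-in for the struct minikube renders with --format;
// only Host is needed to reproduce the "Running" line captured above.
type status struct {
	Host, Kubelet, APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Prints "Running", matching the stdout above, even though the real
	// command exited with status 2.
	if err := tmpl.Execute(os.Stdout, status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}); err != nil {
		panic(err)
	}
}
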
helpers_test.go:244: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-20220725163713-14919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220725163713-14919 logs -n 25: (13.670328247s)
helpers_test.go:252: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                      | force-systemd-flag-20220725163137-14919 | jenkins | v1.26.0 | 25 Jul 22 16:31 PDT | 25 Jul 22 16:32 PDT |
	|         | force-systemd-flag-20220725163137-14919 |                                         |         |         |                     |                     |
	|         | --memory=2048 --force-systemd           |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker  |                                         |         |         |                     |                     |
	| ssh     | force-systemd-env-20220725163108-14919  | force-systemd-env-20220725163108-14919  | jenkins | v1.26.0 | 25 Jul 22 16:31 PDT | 25 Jul 22 16:31 PDT |
	|         | ssh docker info --format                |                                         |         |         |                     |                     |
	|         | {{.CgroupDriver}}                       |                                         |         |         |                     |                     |
	| delete  | -p                                      | force-systemd-env-20220725163108-14919  | jenkins | v1.26.0 | 25 Jul 22 16:31 PDT | 25 Jul 22 16:31 PDT |
	|         | force-systemd-env-20220725163108-14919  |                                         |         |         |                     |                     |
	| start   | -p                                      | docker-flags-20220725163143-14919       | jenkins | v1.26.0 | 25 Jul 22 16:31 PDT | 25 Jul 22 16:32 PDT |
	|         | docker-flags-20220725163143-14919       |                                         |         |         |                     |                     |
	|         | --cache-images=false                    |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --install-addons=false                  |                                         |         |         |                     |                     |
	|         | --wait=false --docker-env=FOO=BAR       |                                         |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                    |                                         |         |         |                     |                     |
	|         | --docker-opt=debug                      |                                         |         |         |                     |                     |
	|         | --docker-opt=icc=true                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | force-systemd-flag-20220725163137-14919 | force-systemd-flag-20220725163137-14919 | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | ssh docker info --format                |                                         |         |         |                     |                     |
	|         | {{.CgroupDriver}}                       |                                         |         |         |                     |                     |
	| delete  | -p                                      | force-systemd-flag-20220725163137-14919 | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | force-systemd-flag-20220725163137-14919 |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220725163211-14919    | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | cert-expiration-20220725163211-14919    |                                         |         |         |                     |                     |
	|         | --memory=2048 --cert-expiration=3m      |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | docker-flags-20220725163143-14919       | docker-flags-20220725163143-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                     |                     |
	|         | --property=Environment --no-pager       |                                         |         |         |                     |                     |
	| ssh     | docker-flags-20220725163143-14919       | docker-flags-20220725163143-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                     |                     |
	|         | --property=ExecStart --no-pager         |                                         |         |         |                     |                     |
	| delete  | -p                                      | docker-flags-20220725163143-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | docker-flags-20220725163143-14919       |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-options-20220725163217-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | cert-options-20220725163217-14919       |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1               |                                         |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15           |                                         |         |         |                     |                     |
	|         | --apiserver-names=localhost             |                                         |         |         |                     |                     |
	|         | --apiserver-names=www.google.com        |                                         |         |         |                     |                     |
	|         | --apiserver-port=8555                   |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	|         | --apiserver-name=localhost              |                                         |         |         |                     |                     |
	| ssh     | cert-options-20220725163217-14919       | cert-options-20220725163217-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | ssh openssl x509 -text -noout -in       |                                         |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt   |                                         |         |         |                     |                     |
	| ssh     | -p                                      | cert-options-20220725163217-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | cert-options-20220725163217-14919       |                                         |         |         |                     |                     |
	|         | -- sudo cat                             |                                         |         |         |                     |                     |
	|         | /etc/kubernetes/admin.conf              |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-options-20220725163217-14919       | jenkins | v1.26.0 | 25 Jul 22 16:32 PDT | 25 Jul 22 16:32 PDT |
	|         | cert-options-20220725163217-14919       |                                         |         |         |                     |                     |
	| delete  | -p                                      | running-upgrade-20220725163251-14919    | jenkins | v1.26.0 | 25 Jul 22 16:33 PDT | 25 Jul 22 16:34 PDT |
	|         | running-upgrade-20220725163251-14919    |                                         |         |         |                     |                     |
	| delete  | -p                                      | missing-upgrade-20220725163400-14919    | jenkins | v1.26.0 | 25 Jul 22 16:34 PDT | 25 Jul 22 16:34 PDT |
	|         | missing-upgrade-20220725163400-14919    |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220725163448-14919 | jenkins | v1.26.0 | 25 Jul 22 16:34 PDT |                     |
	|         | kubernetes-upgrade-20220725163448-14919 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220725163211-14919    | jenkins | v1.26.0 | 25 Jul 22 16:35 PDT | 25 Jul 22 16:36 PDT |
	|         | cert-expiration-20220725163211-14919    |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --cert-expiration=8760h                 |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-expiration-20220725163211-14919    | jenkins | v1.26.0 | 25 Jul 22 16:36 PDT | 25 Jul 22 16:36 PDT |
	|         | cert-expiration-20220725163211-14919    |                                         |         |         |                     |                     |
	| delete  | -p                                      | stopped-upgrade-20220725163620-14919    | jenkins | v1.26.0 | 25 Jul 22 16:37 PDT | 25 Jul 22 16:37 PDT |
	|         | stopped-upgrade-20220725163620-14919    |                                         |         |         |                     |                     |
	| start   | -p pause-20220725163713-14919           | pause-20220725163713-14919              | jenkins | v1.26.0 | 25 Jul 22 16:37 PDT | 25 Jul 22 16:37 PDT |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --install-addons=false                  |                                         |         |         |                     |                     |
	|         | --wait=all --driver=docker              |                                         |         |         |                     |                     |
	| start   | -p pause-20220725163713-14919           | pause-20220725163713-14919              | jenkins | v1.26.0 | 25 Jul 22 16:37 PDT | 25 Jul 22 16:38 PDT |
	|         | --alsologtostderr -v=1                  |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| pause   | -p pause-20220725163713-14919           | pause-20220725163713-14919              | jenkins | v1.26.0 | 25 Jul 22 16:38 PDT | 25 Jul 22 16:38 PDT |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	| stop    | -p                                      | kubernetes-upgrade-20220725163448-14919 | jenkins | v1.26.0 | 25 Jul 22 16:39 PDT | 25 Jul 22 16:39 PDT |
	|         | kubernetes-upgrade-20220725163448-14919 |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220725163448-14919 | jenkins | v1.26.0 | 25 Jul 22 16:39 PDT |                     |
	|         | kubernetes-upgrade-20220725163448-14919 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 16:39:04
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 16:39:04.388659   27339 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:39:04.388812   27339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:39:04.388817   27339 out.go:309] Setting ErrFile to fd 2...
	I0725 16:39:04.388821   27339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:39:04.389002   27339 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:39:04.389445   27339 out.go:303] Setting JSON to false
	I0725 16:39:04.404297   27339 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9267,"bootTime":1658783077,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:39:04.404372   27339 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:39:04.427307   27339 out.go:177] * [kubernetes-upgrade-20220725163448-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:39:04.470004   27339 notify.go:193] Checking for updates...
	I0725 16:39:04.491786   27339 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:39:04.513677   27339 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:39:04.534941   27339 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:39:04.557016   27339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:39:04.578835   27339 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:39:04.601268   27339 config.go:178] Loaded profile config "kubernetes-upgrade-20220725163448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 16:39:04.601942   27339 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:39:04.671939   27339 docker.go:137] docker version: linux-20.10.17
	I0725 16:39:04.672080   27339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:39:04.803372   27339 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:39:04.748595826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:39:04.846977   27339 out.go:177] * Using the docker driver based on existing profile
	I0725 16:39:04.868204   27339 start.go:284] selected driver: docker
	I0725 16:39:04.868236   27339 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220725163448-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220725163448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:39:04.868375   27339 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:39:04.871819   27339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:39:05.005008   27339 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:39:04.950396573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:39:05.005169   27339 cni.go:95] Creating CNI manager for ""
	I0725 16:39:05.005184   27339 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:39:05.005192   27339 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220725163448-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220725163448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:39:05.048385   27339 out.go:177] * Starting control plane node kubernetes-upgrade-20220725163448-14919 in cluster kubernetes-upgrade-20220725163448-14919
	I0725 16:39:05.070644   27339 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:39:05.092590   27339 out.go:177] * Pulling base image ...
	I0725 16:39:05.135646   27339 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:39:05.135657   27339 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:39:05.135722   27339 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0725 16:39:05.135742   27339 cache.go:57] Caching tarball of preloaded images
	I0725 16:39:05.135938   27339 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 16:39:05.135962   27339 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0725 16:39:05.137052   27339 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/config.json ...
	I0725 16:39:05.201249   27339 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:39:05.201282   27339 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:39:05.201294   27339 cache.go:208] Successfully downloaded all kic artifacts
	I0725 16:39:05.201372   27339 start.go:370] acquiring machines lock for kubernetes-upgrade-20220725163448-14919: {Name:mk334774c1af85cfaf9247ebfdb50be9350cdeb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:39:05.201469   27339 start.go:374] acquired machines lock for "kubernetes-upgrade-20220725163448-14919" in 72.429µs
	I0725 16:39:05.201488   27339 start.go:95] Skipping create...Using existing machine configuration
	I0725 16:39:05.201498   27339 fix.go:55] fixHost starting: 
	I0725 16:39:05.201719   27339 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725163448-14919 --format={{.State.Status}}
	I0725 16:39:05.268734   27339 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220725163448-14919: state=Stopped err=<nil>
	W0725 16:39:05.268761   27339 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 16:39:05.312290   27339 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220725163448-14919" ...
	I0725 16:39:05.333706   27339 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220725163448-14919
	I0725 16:39:05.673512   27339 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725163448-14919 --format={{.State.Status}}
	I0725 16:39:05.759991   27339 kic.go:415] container "kubernetes-upgrade-20220725163448-14919" state is running.
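The fixHost flow above is inspect, start, re-inspect: the first inspect reports state=Stopped, `docker start` brings the container back, and a second inspect confirms "running". A compact sketch of that loop using the same docker CLI invocations shown in the log (the container name is a placeholder, and the polling parameters are arbitrary):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// state returns the container's status as docker reports it.
	func state(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	// ensureRunning starts a stopped container and waits until docker
	// reports it running.
	func ensureRunning(name string) error {
		s, err := state(name)
		if err != nil {
			return err
		}
		if s != "running" {
			if err := exec.Command("docker", "start", name).Run(); err != nil {
				return err
			}
		}
		for i := 0; i < 20; i++ {
			if s, _ := state(name); s == "running" {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("container %s did not reach running state", name)
	}

	func main() {
		if err := ensureRunning("demo-container"); err != nil {
			fmt.Println(err)
		}
	}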
	I0725 16:39:05.760598   27339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:05.844110   27339 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubernetes-upgrade-20220725163448-14919/config.json ...
	I0725 16:39:05.844708   27339 machine.go:88] provisioning docker machine ...
	I0725 16:39:05.844737   27339 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220725163448-14919"
	I0725 16:39:05.844954   27339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:05.924364   27339 main.go:134] libmachine: Using SSH client type: native
	I0725 16:39:05.924579   27339 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64036 <nil> <nil>}
	I0725 16:39:05.924602   27339 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220725163448-14919 && echo "kubernetes-upgrade-20220725163448-14919" | sudo tee /etc/hostname
	I0725 16:39:06.052671   27339 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220725163448-14919
	
	I0725 16:39:06.052755   27339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:06.128625   27339 main.go:134] libmachine: Using SSH client type: native
	I0725 16:39:06.128768   27339 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64036 <nil> <nil>}
	I0725 16:39:06.128784   27339 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220725163448-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220725163448-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220725163448-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 16:39:06.250062   27339 main.go:134] libmachine: SSH cmd err, output: <nil>: 
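The SSH command that just returned is an idempotent hosts-file edit: if no line already ends in the node name, it either rewrites an existing 127.0.1.1 entry in place or appends a fresh one. The same decision logic in Go, operating on the file contents as a string (a hypothetical helper for illustration, not minikube code):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry mirrors the shell snippet above: if any line already
	// maps the hostname, leave the contents alone; otherwise replace an
	// existing 127.0.1.1 line or append a new one.
	func ensureHostsEntry(hosts, hostname string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
				return hosts // already mapped
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + hostname
	}

	func main() {
		fmt.Println(ensureHostsEntry("127.0.0.1 localhost", "kubernetes-upgrade-demo"))
	}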
	I0725 16:39:06.250081   27339 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:39:06.250107   27339 ubuntu.go:177] setting up certificates
	I0725 16:39:06.250117   27339 provision.go:83] configureAuth start
	I0725 16:39:06.250188   27339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:06.320989   27339 provision.go:138] copyHostCerts
	I0725 16:39:06.321092   27339 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:39:06.321102   27339 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:39:06.321201   27339 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:39:06.321418   27339 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:39:06.321430   27339 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:39:06.321492   27339 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:39:06.321668   27339 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:39:06.321679   27339 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:39:06.321740   27339 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:39:06.321863   27339 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220725163448-14919 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220725163448-14919]
	I0725 16:39:06.566174   27339 provision.go:172] copyRemoteCerts
	I0725 16:39:06.566241   27339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:39:06.566297   27339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:06.642485   27339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64036 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:39:06.731291   27339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:39:06.748874   27339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0725 16:39:06.766021   27339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 16:39:06.782970   27339 provision.go:86] duration metric: configureAuth took 532.821916ms
	I0725 16:39:06.783004   27339 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:39:06.783227   27339 config.go:178] Loaded profile config "kubernetes-upgrade-20220725163448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:39:06.783312   27339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:06.859500   27339 main.go:134] libmachine: Using SSH client type: native
	I0725 16:39:06.859662   27339 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64036 <nil> <nil>}
	I0725 16:39:06.859673   27339 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:39:06.981714   27339 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:39:06.981733   27339 ubuntu.go:71] root file system type: overlay
	I0725 16:39:06.981850   27339 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:39:06.981923   27339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:07.073949   27339 main.go:134] libmachine: Using SSH client type: native
	I0725 16:39:07.074114   27339 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64036 <nil> <nil>}
	I0725 16:39:07.074173   27339 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:39:07.207264   27339 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:39:07.207340   27339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:07.278785   27339 main.go:134] libmachine: Using SSH client type: native
	I0725 16:39:07.278960   27339 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 64036 <nil> <nil>}
	I0725 16:39:07.278974   27339 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 16:39:07.407895   27339 main.go:134] libmachine: SSH cmd err, output: <nil>: 
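The command above is a write-if-changed update: the fresh unit is written to docker.service.new, and only when `diff` finds a difference is it moved into place, followed by daemon-reload, enable, and restart, so an unchanged config never bounces the Docker daemon (here the diff was empty and nothing restarted). A sketch of the same pattern in Go, with the restart step omitted and a placeholder path:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// replaceIfChanged writes newContent to path only when it differs from
	// what is already there, reporting whether a reload/restart would be
	// needed.
	func replaceIfChanged(path string, newContent []byte) (changed bool, err error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, newContent) {
			return false, nil // identical: leave the unit and the daemon alone
		}
		if err := os.WriteFile(path, newContent, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		changed, err := replaceIfChanged("/tmp/docker.service.demo", []byte("[Unit]\n"))
		if err != nil {
			panic(err)
		}
		fmt.Println("restart needed:", changed)
	}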
	I0725 16:39:07.407914   27339 machine.go:91] provisioned docker machine in 1.563180532s
	I0725 16:39:07.407925   27339 start.go:307] post-start starting for "kubernetes-upgrade-20220725163448-14919" (driver="docker")
	I0725 16:39:07.407931   27339 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:39:07.408000   27339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:39:07.408045   27339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:07.479995   27339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64036 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:39:07.568707   27339 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:39:07.572204   27339 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:39:07.572224   27339 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:39:07.572231   27339 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:39:07.572236   27339 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 16:39:07.572247   27339 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:39:07.572361   27339 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:39:07.572494   27339 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:39:07.572635   27339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:39:07.579698   27339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:39:07.596269   27339 start.go:310] post-start completed in 188.326693ms
	I0725 16:39:07.596337   27339 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:39:07.596392   27339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:07.668401   27339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64036 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:39:07.752694   27339 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:39:07.757027   27339 fix.go:57] fixHost completed within 2.555500959s
	I0725 16:39:07.757039   27339 start.go:82] releasing machines lock for "kubernetes-upgrade-20220725163448-14919", held for 2.555538277s
	I0725 16:39:07.757128   27339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:07.831332   27339 ssh_runner.go:195] Run: systemctl --version
	I0725 16:39:07.831332   27339 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:39:07.831408   27339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:07.831435   27339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:07.912950   27339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64036 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:39:07.914790   27339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64036 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/kubernetes-upgrade-20220725163448-14919/id_rsa Username:docker}
	I0725 16:39:08.217945   27339 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:39:08.227433   27339 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:39:08.227491   27339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:39:08.238841   27339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:39:08.250933   27339 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 16:39:08.322814   27339 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:39:08.391298   27339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:39:08.465137   27339 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:39:08.665426   27339 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 16:39:08.736800   27339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:39:08.809714   27339 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 16:39:08.820437   27339 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 16:39:08.820505   27339 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 16:39:08.824206   27339 start.go:471] Will wait 60s for crictl version
	I0725 16:39:08.824253   27339 ssh_runner.go:195] Run: sudo crictl version
	I0725 16:39:08.920457   27339 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 16:39:08.920542   27339 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:39:08.954772   27339 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:39:09.033636   27339 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0725 16:39:09.033846   27339 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220725163448-14919 dig +short host.docker.internal
	I0725 16:39:09.166735   27339 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:39:09.166863   27339 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:39:09.171108   27339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:39:09.180487   27339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725163448-14919
	I0725 16:39:09.253878   27339 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:39:09.253944   27339 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:39:09.284088   27339 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 16:39:09.284101   27339 docker.go:617] k8s.gcr.io/kube-apiserver:v1.24.3 wasn't preloaded
	I0725 16:39:09.284142   27339 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0725 16:39:09.292411   27339 ssh_runner.go:195] Run: which lz4
	I0725 16:39:09.296397   27339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 16:39:09.300276   27339 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0725 16:39:09.300301   27339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (425458757 bytes)
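The stat/scp pair above is check-then-transfer: the runner stats /preloaded.tar.lz4 inside the container, and only when the stat exits non-zero ("No such file or directory") does it push the roughly 425 MB preload tarball over SSH. A local-filesystem sketch of that flow (the real transfer goes over SSH, and the paths here are placeholders):

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyIfMissing mirrors the stat-then-scp logic: skip the transfer
	// entirely when the destination already exists.
	func copyIfMissing(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			return nil // already present, nothing to do
		} else if !os.IsNotExist(err) {
			return err
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		if err := copyIfMissing("/tmp/preloaded.tar.lz4", "/tmp/preloaded-copy.tar.lz4"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}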
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 23:37:21 UTC, end at Mon 2022-07-25 23:39:14 UTC. --
	Jul 25 23:38:11 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:11.677439201Z" level=info msg="Loading containers: start."
	Jul 25 23:38:11 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:11.765676925Z" level=info msg="ignoring event" container=9de9707fdfe8ddd5ef15a97ee93e7b8aca530e19b73b3e8204b1c85ea93e1953 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:38:11 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:11.771646864Z" level=info msg="ignoring event" container=659612f19a6aaa32c703c03f152d36fc99ecc459a5b841ef5fa9a26f8e5c89f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:38:11 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:11.833348281Z" level=info msg="ignoring event" container=bb2097e47a54e844b6548c2222980ef60b5df813c04c19e9e7ffa0a625d5deb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:38:11 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:11.833395163Z" level=info msg="ignoring event" container=307036f6ab8f4eb416ae7a582a7edafa21b40dc2e0b4161dedcfff699f147c55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:38:11 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:11.833440662Z" level=info msg="ignoring event" container=740540cf694c3921f4d886bc18c3335108fcf77fa7bd99ad78735fcece97d3f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:38:11 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:11.837057021Z" level=info msg="ignoring event" container=0755b42f2226a3e5635c41129a913b0792e2515c9b64bb3337aaa6840a9d954e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:38:11 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:11.840353251Z" level=info msg="ignoring event" container=4a0fe83d1d27f8d1eec004da7ca52455707b46d87223f9d32f222b76c1c8759c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.230711567Z" level=info msg="ignoring event" container=22c26e8e5a4ff24eb171edd3c4e509f99a0927e3388f098363cd6ac80c4c8519 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.384811970Z" level=info msg="Removing stale sandbox 14e3fc29ea7689f5803cb11bb31c4869cfdabe531adf86030183f111c8f3fc2b (4a0fe83d1d27f8d1eec004da7ca52455707b46d87223f9d32f222b76c1c8759c)"
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.388727853Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint aeed3ba28f0f755198c9464e70f0b0cd326e6ac8028b58e659cd055af0e3acc0 4b31b729a74e0f2bebf2c6925da53c672e5bbd097b2ff40b6b7d681007063d8c], retrying...."
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.474647464Z" level=info msg="Removing stale sandbox 18b95778655bdd1009600423147faf034ece4400a9d9bdadde334c417a12576c (659612f19a6aaa32c703c03f152d36fc99ecc459a5b841ef5fa9a26f8e5c89f2)"
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.475938307Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint bc01f8573f2894d089b3187b5f5ac079c93d1bca11684befd72afac3e7bed887 eb71aa1d5ec55b7af4942b14c1968e08945152ee3bea58d07dd0b82e76a424ec], retrying...."
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.561492699Z" level=info msg="Removing stale sandbox 2c18cd4cbd3d0a42ce2e7a44af279fc59aac527d465a14a962bdd32788d4d412 (0755b42f2226a3e5635c41129a913b0792e2515c9b64bb3337aaa6840a9d954e)"
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.562797823Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint bc01f8573f2894d089b3187b5f5ac079c93d1bca11684befd72afac3e7bed887 ab84bb2e3aeb4dd93d752daa89cf011bfc4623a80c2fea1890457703af0456a6], retrying...."
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.648447271Z" level=info msg="Removing stale sandbox e468ba637c200625e3b3b11433c1053f258eab59621919242dc1f92068b2e1b4 (9de9707fdfe8ddd5ef15a97ee93e7b8aca530e19b73b3e8204b1c85ea93e1953)"
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.649766138Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint bc01f8573f2894d089b3187b5f5ac079c93d1bca11684befd72afac3e7bed887 6f9bb959ec1f9717dbcf705bedfc713a5f5e6e55c9daa515f84ef82a432e198e], retrying...."
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.672308160Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.708117025Z" level=info msg="Loading containers: done."
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.717278608Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.717415582Z" level=info msg="Daemon has completed initialization"
	Jul 25 23:38:17 pause-20220725163713-14919 systemd[1]: Started Docker Application Container Engine.
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.738997906Z" level=info msg="API listen on [::]:2376"
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.744216579Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 23:38:17 pause-20220725163713-14919 dockerd[3822]: time="2022-07-25T23:38:17.935231204Z" level=error msg="Failed to compute size of container rootfs 7de11f8228226c60db125d5a201812ae3231025bdd4625ddb009332830cc70f4: mount does not exist"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	f8de4af1d1edd       6e38f40d628db       42 seconds ago       Running             storage-provisioner       0                   f5710c79c3708
	4d888c6a317f9       aebe758cef4cd       49 seconds ago       Running             etcd                      2                   5d56ec9515a7e
	40b2cbb0f4a53       586c112956dfc       52 seconds ago       Running             kube-controller-manager   2                   c45954cc9f240
	9a11cd0c4faa9       a4ca41631cc7a       56 seconds ago       Running             coredns                   2                   2334df57e2106
	ea6f85ddced2d       3a5aa3a515f5d       56 seconds ago       Running             kube-scheduler            2                   8201e2e03b321
	d314dcdb5a061       d521dd763e2e3       56 seconds ago       Running             kube-apiserver            1                   9e19a049636c4
	bc1b23ded438e       2ae1ba6417cbc       56 seconds ago       Running             kube-proxy                1                   75ce8c440c671
	22c26e8e5a4ff       a4ca41631cc7a       About a minute ago   Exited              coredns                   1                   4a0fe83d1d27f
	740540cf694c3       586c112956dfc       About a minute ago   Exited              kube-controller-manager   1                   0755b42f2226a
	307036f6ab8f4       3a5aa3a515f5d       About a minute ago   Exited              kube-scheduler            1                   659612f19a6aa
	bb2097e47a54e       aebe758cef4cd       About a minute ago   Exited              etcd                      1                   9de9707fdfe8d
	56d6dbd9d04b7       2ae1ba6417cbc       About a minute ago   Exited              kube-proxy                0                   7f68855ee1556
	7cfad7e54da61       d521dd763e2e3       About a minute ago   Exited              kube-apiserver            0                   a24eb983a2c5a
	
	* 
	* ==> coredns [22c26e8e5a4f] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [9a11cd0c4faa] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001693] FS-Cache: O-key=[8] '4b21e30300000000'
	[  +0.001143] FS-Cache: N-cookie c=00000000a437b4cf [p=0000000011de058a fl=2 nc=0 na=1]
	[  +0.001815] FS-Cache: N-cookie d=00000000f8b35d0a n=00000000f753d2b0
	[  +0.001563] FS-Cache: N-key=[8] '4b21e30300000000'
	[  +0.002293] FS-Cache: Duplicate cookie detected
	[  +0.001068] FS-Cache: O-cookie c=000000005465244a [p=0000000011de058a fl=226 nc=0 na=1]
	[  +0.002019] FS-Cache: O-cookie d=00000000f8b35d0a n=0000000034ab7a4f
	[  +0.001717] FS-Cache: O-key=[8] '4b21e30300000000'
	[  +0.001378] FS-Cache: N-cookie c=00000000a437b4cf [p=0000000011de058a fl=2 nc=0 na=1]
	[  +0.002284] FS-Cache: N-cookie d=00000000f8b35d0a n=000000007b4e0c1a
	[  +0.001579] FS-Cache: N-key=[8] '4b21e30300000000'
	[  +4.146243] FS-Cache: Duplicate cookie detected
	[  +0.001146] FS-Cache: O-cookie c=000000007cbfde7b [p=0000000011de058a fl=226 nc=0 na=1]
	[  +0.001777] FS-Cache: O-cookie d=00000000f8b35d0a n=0000000086804b65
	[  +0.001765] FS-Cache: O-key=[8] '4a21e30300000000'
	[  +0.001116] FS-Cache: N-cookie c=00000000a437b4cf [p=0000000011de058a fl=2 nc=0 na=1]
	[  +0.001777] FS-Cache: N-cookie d=00000000f8b35d0a n=000000007b4e0c1a
	[  +0.001461] FS-Cache: N-key=[8] '4a21e30300000000'
	[  +0.500906] FS-Cache: Duplicate cookie detected
	[  +0.001416] FS-Cache: O-cookie c=00000000d2bf1a30 [p=0000000011de058a fl=226 nc=0 na=1]
	[  +0.001824] FS-Cache: O-cookie d=00000000f8b35d0a n=000000006a798d13
	[  +0.001465] FS-Cache: O-key=[8] '5221e30300000000'
	[  +0.001130] FS-Cache: N-cookie c=00000000a437b4cf [p=0000000011de058a fl=2 nc=0 na=1]
	[  +0.001784] FS-Cache: N-cookie d=00000000f8b35d0a n=00000000e6dd0465
	[  +0.001456] FS-Cache: N-key=[8] '5221e30300000000'
	
	* 
	* ==> etcd [4d888c6a317f] <==
	* {"level":"info","ts":"2022-07-25T23:38:26.075Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-25T23:38:26.076Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-25T23:38:26.076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-07-25T23:38:26.076Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-07-25T23:38:26.076Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:38:26.076Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:38:26.077Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T23:38:26.077Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T23:38:26.077Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T23:38:26.077Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-25T23:38:26.077Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-25T23:38:27.571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2022-07-25T23:38:27.571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-07-25T23:38:27.572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-25T23:38:27.572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2022-07-25T23:38:27.572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-07-25T23:38:27.572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2022-07-25T23:38:27.572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-07-25T23:38:27.572Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:38:27.572Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-20220725163713-14919 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T23:38:27.572Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:38:27.573Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T23:38:27.574Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T23:38:27.575Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-07-25T23:38:27.576Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [bb2097e47a54] <==
	* {"level":"info","ts":"2022-07-25T23:38:02.906Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-25T23:38:02.906Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T23:38:02.906Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T23:38:04.202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-25T23:38:04.202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-25T23:38:04.202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-25T23:38:04.202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-07-25T23:38:04.202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-25T23:38:04.202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-07-25T23:38:04.202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-25T23:38:04.203Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-20220725163713-14919 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T23:38:04.203Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:38:04.203Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:38:04.203Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T23:38:04.203Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T23:38:04.204Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T23:38:04.204Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-07-25T23:38:11.714Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-25T23:38:11.715Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-20220725163713-14919","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/07/25 23:38:11 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/25 23:38:11 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-25T23:38:11.717Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-07-25T23:38:11.728Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-25T23:38:11.730Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-25T23:38:11.730Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-20220725163713-14919","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  23:39:25 up 46 min,  0 users,  load average: 0.81, 1.15, 0.92
	Linux pause-20220725163713-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [7cfad7e54da6] <==
	* I0725 23:38:04.581902       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0725 23:38:04.582210       1 controller.go:122] Shutting down OpenAPI controller
	I0725 23:38:04.582235       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0725 23:38:04.582248       1 controller.go:89] Shutting down OpenAPI AggregationController
	I0725 23:38:04.582258       1 storage_flowcontrol.go:172] APF bootstrap ensurer is exiting
	I0725 23:38:04.582266       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 23:38:04.582273       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0725 23:38:04.582279       1 available_controller.go:503] Shutting down AvailableConditionController
	I0725 23:38:04.582286       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0725 23:38:04.582296       1 apf_controller.go:326] Shutting down API Priority and Fairness config worker
	I0725 23:38:04.582328       1 establishing_controller.go:87] Shutting down EstablishingController
	I0725 23:38:04.582337       1 naming_controller.go:302] Shutting down NamingConditionController
	I0725 23:38:04.582427       1 controller.go:115] Shutting down OpenAPI V3 controller
	I0725 23:38:04.582211       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0725 23:38:04.582299       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0725 23:38:04.582305       1 customresource_discovery_controller.go:245] Shutting down DiscoveryController
	I0725 23:38:04.582380       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0725 23:38:04.582222       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0725 23:38:04.582559       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 23:38:04.582571       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0725 23:38:04.582617       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 23:38:04.582565       1 secure_serving.go:255] Stopped listening on [::]:8443
	I0725 23:38:04.582580       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0725 23:38:04.582789       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 23:38:04.583003       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	
	* 
	* ==> kube-apiserver [d314dcdb5a06] <==
	* I0725 23:38:29.430822       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0725 23:38:29.418026       1 controller.go:85] Starting OpenAPI controller
	I0725 23:38:29.418036       1 controller.go:85] Starting OpenAPI V3 controller
	I0725 23:38:29.418054       1 naming_controller.go:291] Starting NamingConditionController
	I0725 23:38:29.418061       1 establishing_controller.go:76] Starting EstablishingController
	I0725 23:38:29.418067       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0725 23:38:29.418074       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0725 23:38:29.418079       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0725 23:38:29.432084       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 23:38:29.440052       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 23:38:29.451395       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 23:38:29.461621       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0725 23:38:29.529094       1 cache.go:39] Caches are synced for autoregister controller
	I0725 23:38:29.529745       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 23:38:29.530568       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 23:38:29.531036       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0725 23:38:29.531117       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0725 23:38:29.531325       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0725 23:38:29.563489       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 23:38:30.190348       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0725 23:38:30.413565       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 23:38:31.647937       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 23:38:31.664233       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 23:38:31.671184       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 23:38:31.677077       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [40b2cbb0f4a5] <==
	* I0725 23:38:23.919346       1 serving.go:348] Generated self-signed cert in-memory
	I0725 23:38:24.189432       1 controllermanager.go:180] Version: v1.24.3
	I0725 23:38:24.189477       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 23:38:24.190499       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 23:38:24.190609       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 23:38:24.190667       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0725 23:38:24.190794       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 23:38:31.562812       1 shared_informer.go:255] Waiting for caches to sync for tokens
	I0725 23:38:31.564659       1 controllermanager.go:593] Started "cronjob"
	I0725 23:38:31.564797       1 cronjob_controllerv2.go:135] "Starting cronjob controller v2"
	I0725 23:38:31.564817       1 shared_informer.go:255] Waiting for caches to sync for cronjob
	I0725 23:38:31.566202       1 node_ipam_controller.go:91] Sending events to api server.
	I0725 23:38:31.663561       1 shared_informer.go:262] Caches are synced for tokens
	
	* 
	* ==> kube-controller-manager [740540cf694c] <==
	* I0725 23:38:06.222137       1 serving.go:348] Generated self-signed cert in-memory
	I0725 23:38:06.573259       1 controllermanager.go:180] Version: v1.24.3
	I0725 23:38:06.573294       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 23:38:06.574122       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 23:38:06.574156       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0725 23:38:06.574166       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 23:38:06.574182       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-proxy [56d6dbd9d04b] <==
	* I0725 23:37:53.961791       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0725 23:37:53.961867       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0725 23:37:53.961888       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 23:37:53.986555       1 server_others.go:206] "Using iptables Proxier"
	I0725 23:37:53.986604       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 23:37:53.986613       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 23:37:53.986624       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 23:37:53.986771       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 23:37:53.987006       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 23:37:53.987367       1 server.go:661] "Version info" version="v1.24.3"
	I0725 23:37:53.987431       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 23:37:53.987890       1 config.go:444] "Starting node config controller"
	I0725 23:37:53.987941       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 23:37:53.988198       1 config.go:317] "Starting service config controller"
	I0725 23:37:53.988320       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 23:37:53.988255       1 config.go:226] "Starting endpoint slice config controller"
	I0725 23:37:53.988400       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 23:37:54.088668       1 shared_informer.go:262] Caches are synced for service config
	I0725 23:37:54.088753       1 shared_informer.go:262] Caches are synced for node config
	I0725 23:37:54.088764       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [bc1b23ded438] <==
	* E0725 23:38:18.564281       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220725163713-14919": dial tcp 192.168.67.2:8443: connect: connection refused
	I0725 23:38:29.445719       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0725 23:38:29.445926       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0725 23:38:29.446080       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 23:38:29.556647       1 server_others.go:206] "Using iptables Proxier"
	I0725 23:38:29.556689       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 23:38:29.556697       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 23:38:29.556706       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 23:38:29.556735       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 23:38:29.556863       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 23:38:29.557117       1 server.go:661] "Version info" version="v1.24.3"
	I0725 23:38:29.557124       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 23:38:29.562582       1 config.go:444] "Starting node config controller"
	I0725 23:38:29.562605       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 23:38:29.563031       1 config.go:226] "Starting endpoint slice config controller"
	I0725 23:38:29.563061       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 23:38:29.560939       1 config.go:317] "Starting service config controller"
	I0725 23:38:29.563075       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 23:38:29.663587       1 shared_informer.go:262] Caches are synced for service config
	I0725 23:38:29.663813       1 shared_informer.go:262] Caches are synced for node config
	I0725 23:38:29.663640       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [307036f6ab8f] <==
	* W0725 23:38:08.325099       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0725 23:38:08.325183       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0725 23:38:08.406763       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0725 23:38:08.406861       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0725 23:38:08.673376       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0725 23:38:08.673467       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0725 23:38:08.680811       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0725 23:38:08.680894       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0725 23:38:08.739535       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0725 23:38:08.739752       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0725 23:38:08.926274       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0725 23:38:08.926357       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0725 23:38:09.012775       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.67.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0725 23:38:09.012856       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.67.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0725 23:38:10.872797       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0725 23:38:10.872823       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0725 23:38:11.396613       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0725 23:38:11.396765       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0725 23:38:11.485075       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0725 23:38:11.485176       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	I0725 23:38:11.734104       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0725 23:38:11.734153       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0725 23:38:11.734207       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 23:38:11.734216       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0725 23:38:11.734640       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [ea6f85ddced2] <==
	* I0725 23:38:19.342140       1 serving.go:348] Generated self-signed cert in-memory
	I0725 23:38:29.462904       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0725 23:38:29.462940       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 23:38:29.535171       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0725 23:38:29.535272       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0725 23:38:29.535301       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0725 23:38:29.535316       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 23:38:29.537081       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 23:38:29.537109       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 23:38:29.537125       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0725 23:38:29.537128       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 23:38:29.635452       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0725 23:38:29.637919       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 23:38:29.637960       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 23:37:21 UTC, end at Mon 2022-07-25 23:39:26 UTC. --
	Jul 25 23:38:17 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:17.937383    1888 scope.go:110] "RemoveContainer" containerID="e9b8eecaf9fa8c4b5562d570d8fe55834a050e4777e4e1823a2a623a5531e446"
	Jul 25 23:38:17 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:17.948467    1888 scope.go:110] "RemoveContainer" containerID="f334466f1476294262ef882d944b529053c283a8dc46f5d0eb6de2b2fced9055"
	Jul 25 23:38:18 pause-20220725163713-14919 kubelet[1888]: E0725 23:38:18.571781    1888 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-20220725163713-14919_kube-system(c0f478559beb32d3be70ff33c83f130d)\"" pod="kube-system/kube-controller-manager-pause-20220725163713-14919" podUID=c0f478559beb32d3be70ff33c83f130d
	Jul 25 23:38:18 pause-20220725163713-14919 kubelet[1888]: E0725 23:38:18.628067    1888 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-pause-20220725163713-14919_kube-system(b890ddd436cdac535f6ee1c3f77d0b8c)\"" pod="kube-system/etcd-pause-20220725163713-14919" podUID=b890ddd436cdac535f6ee1c3f77d0b8c
	Jul 25 23:38:19 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:19.050036    1888 scope.go:110] "RemoveContainer" containerID="740540cf694c3921f4d886bc18c3335108fcf77fa7bd99ad78735fcece97d3f8"
	Jul 25 23:38:19 pause-20220725163713-14919 kubelet[1888]: E0725 23:38:19.050285    1888 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-20220725163713-14919_kube-system(c0f478559beb32d3be70ff33c83f130d)\"" pod="kube-system/kube-controller-manager-pause-20220725163713-14919" podUID=c0f478559beb32d3be70ff33c83f130d
	Jul 25 23:38:19 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:19.065824    1888 scope.go:110] "RemoveContainer" containerID="bb2097e47a54e844b6548c2222980ef60b5df813c04c19e9e7ffa0a625d5deb6"
	Jul 25 23:38:19 pause-20220725163713-14919 kubelet[1888]: E0725 23:38:19.066087    1888 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-pause-20220725163713-14919_kube-system(b890ddd436cdac535f6ee1c3f77d0b8c)\"" pod="kube-system/etcd-pause-20220725163713-14919" podUID=b890ddd436cdac535f6ee1c3f77d0b8c
	Jul 25 23:38:20 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:20.088353    1888 scope.go:110] "RemoveContainer" containerID="740540cf694c3921f4d886bc18c3335108fcf77fa7bd99ad78735fcece97d3f8"
	Jul 25 23:38:20 pause-20220725163713-14919 kubelet[1888]: E0725 23:38:20.088598    1888 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-20220725163713-14919_kube-system(c0f478559beb32d3be70ff33c83f130d)\"" pod="kube-system/kube-controller-manager-pause-20220725163713-14919" podUID=c0f478559beb32d3be70ff33c83f130d
	Jul 25 23:38:20 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:20.088785    1888 scope.go:110] "RemoveContainer" containerID="bb2097e47a54e844b6548c2222980ef60b5df813c04c19e9e7ffa0a625d5deb6"
	Jul 25 23:38:20 pause-20220725163713-14919 kubelet[1888]: E0725 23:38:20.088979    1888 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-pause-20220725163713-14919_kube-system(b890ddd436cdac535f6ee1c3f77d0b8c)\"" pod="kube-system/etcd-pause-20220725163713-14919" podUID=b890ddd436cdac535f6ee1c3f77d0b8c
	Jul 25 23:38:22 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:22.863739    1888 scope.go:110] "RemoveContainer" containerID="740540cf694c3921f4d886bc18c3335108fcf77fa7bd99ad78735fcece97d3f8"
	Jul 25 23:38:25 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:25.948088    1888 scope.go:110] "RemoveContainer" containerID="bb2097e47a54e844b6548c2222980ef60b5df813c04c19e9e7ffa0a625d5deb6"
	Jul 25 23:38:29 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:29.040344    1888 status_manager.go:664] "Failed to get status for pod" podUID=bb398eba-5649-4517-88b8-eb8e5182933a pod="kube-system/kube-proxy-bwrz4" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bwrz4\": net/http: TLS handshake timeout"
	Jul 25 23:38:31 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:31.689540    1888 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 23:38:31 pause-20220725163713-14919 kubelet[1888]: E0725 23:38:31.689630    1888 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4ba821db-f9df-4c13-a6cc-05d96467c93f" containerName="coredns"
	Jul 25 23:38:31 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:31.689657    1888 memory_manager.go:345] "RemoveStaleState removing state" podUID="4ba821db-f9df-4c13-a6cc-05d96467c93f" containerName="coredns"
	Jul 25 23:38:31 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:31.853881    1888 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/58be12c2-65ea-498d-b0e2-5a9821ddd103-tmp\") pod \"storage-provisioner\" (UID: \"58be12c2-65ea-498d-b0e2-5a9821ddd103\") " pod="kube-system/storage-provisioner"
	Jul 25 23:38:31 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:31.853980    1888 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnzjp\" (UniqueName: \"kubernetes.io/projected/58be12c2-65ea-498d-b0e2-5a9821ddd103-kube-api-access-wnzjp\") pod \"storage-provisioner\" (UID: \"58be12c2-65ea-498d-b0e2-5a9821ddd103\") " pod="kube-system/storage-provisioner"
	Jul 25 23:38:40 pause-20220725163713-14919 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jul 25 23:38:40 pause-20220725163713-14919 kubelet[1888]: I0725 23:38:40.510485    1888 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jul 25 23:38:40 pause-20220725163713-14919 systemd[1]: kubelet.service: Succeeded.
	Jul 25 23:38:40 pause-20220725163713-14919 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 25 23:38:40 pause-20220725163713-14919 systemd[1]: kubelet.service: Consumed 1.864s CPU time.
	
	* 
	* ==> storage-provisioner [f8de4af1d1ed] <==
	* I0725 23:38:33.085325       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 23:38:33.095106       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 23:38:33.095132       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 23:38:33.104760       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 23:38:33.104996       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220725163713-14919_99d38ca0-0880-4e34-9ac1-f8ed3fd39b35!
	I0725 23:38:33.105384       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6326d6f-eb32-4354-a680-bcdca9b3f781", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220725163713-14919_99d38ca0-0880-4e34-9ac1-f8ed3fd39b35 became leader
	I0725 23:38:33.205960       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220725163713-14919_99d38ca0-0880-4e34-9ac1-f8ed3fd39b35!
	
	

-- /stdout --
** stderr ** 
	E0725 16:39:24.700719   27417 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220725163713-14919 -n pause-20220725163713-14919

=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220725163713-14919 -n pause-20220725163713-14919: exit status 2 (16.205684635s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-20220725163713-14919" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/VerifyStatus (62.21s)
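
The status probe that failed above can be re-run outside the harness. Below is a minimal Go sketch of the same check, assuming the binary path and profile name shown in this run; the apiServerStatus wrapper is illustrative and is not a helper from helpers_test.go.

	// Minimal sketch of the apiserver status probe; the binary path and
	// profile name are taken from the log above, the wrapper is hypothetical.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func apiServerStatus(profile string) (string, error) {
		// Same invocation recorded at helpers_test.go:254:
		//   out/minikube-darwin-amd64 status --format={{.APIServer}} -p <profile> -n <profile>
		out, err := exec.Command("out/minikube-darwin-amd64",
			"status", "--format={{.APIServer}}", "-p", profile, "-n", profile).Output()
		// minikube status exits non-zero when a component is not Running, so
		// stdout ("Stopped" in this run) is still meaningful when err != nil.
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		state, err := apiServerStatus("pause-20220725163713-14919")
		fmt.Printf("apiserver=%q err=%v\n", state, err)
	}

Exit status 2 with "Stopped" on stdout, as captured above, means the status command ran and found the apiserver down; the failure reflects real component state, not the probe itself breaking.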

TestNetworkPlugins/group/kubenet/HairPin (63.64s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0725 16:46:09.660692   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.125136629s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0725 16:46:19.901341   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.110461411s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.107862788s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.102362021s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0725 16:46:40.383931   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.112223167s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E0725 16:46:43.951920   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:46:43.957083   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:46:43.967489   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:46:43.987576   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:46:44.027909   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:46:44.109213   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:46:44.269978   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:46:44.590401   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:46:45.231156   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0725 16:46:46.512245   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:46:49.072945   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.109290626s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E0725 16:46:54.195411   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:46:55.931392   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.10542301s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E0725 16:47:04.437811   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.104957269s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (63.64s)
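
The hairpin check above is one fixed command retried until it connects: a kubectl exec of nc from the netcat deployment back to its own service name. Below is a minimal standalone Go sketch of that loop, assuming the context name from this run; the retry count and delay are illustrative and are not net_test.go's actual backoff.

	// Minimal sketch of the hairpin probe retried above; the kubectl command
	// line is copied from the log, the backoff values are assumptions.
	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		ctx := "kubenet-20220725163045-14919" // context name from this run
		for attempt := 1; attempt <= 5; attempt++ {
			// Connect back to the netcat service (hairpin NAT) from a pod
			// that sits behind that same service.
			cmd := exec.Command("kubectl", "--context", ctx,
				"exec", "deployment/netcat", "--",
				"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Printf("attempt %d: %v: %s", attempt, err, out)
				time.Sleep(2 * time.Second)
				continue
			}
			log.Println("hairpin connection succeeded")
			return
		}
		log.Fatal("failed to connect via pod host") // the message at net_test.go:243
	}

Every attempt in the log exits 1 after roughly 5.1 seconds, consistent with nc's -w 5 timeout expiring each time, which points at hairpin traffic genuinely not looping back rather than a single flaky probe.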

TestStartStop/group/old-k8s-version/serial/FirstStart (249.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220725164610-14919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0725 16:46:10.636193   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220725164610-14919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m9.392840184s)

-- stdout --
	* [old-k8s-version-20220725164610-14919] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-20220725164610-14919 in cluster old-k8s-version-20220725164610-14919
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0725 16:46:10.458578   29782 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:46:10.458756   29782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:46:10.458762   29782 out.go:309] Setting ErrFile to fd 2...
	I0725 16:46:10.458765   29782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:46:10.458878   29782 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:46:10.459414   29782 out.go:303] Setting JSON to false
	I0725 16:46:10.474380   29782 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9693,"bootTime":1658783077,"procs":358,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:46:10.474474   29782 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:46:10.496794   29782 out.go:177] * [old-k8s-version-20220725164610-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:46:10.518443   29782 notify.go:193] Checking for updates...
	I0725 16:46:10.539450   29782 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:46:10.561742   29782 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:46:10.583883   29782 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:46:10.605494   29782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:46:10.626841   29782 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:46:10.649600   29782 config.go:178] Loaded profile config "kubenet-20220725163045-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:46:10.649691   29782 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:46:10.719775   29782 docker.go:137] docker version: linux-20.10.17
	I0725 16:46:10.719903   29782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:46:10.860350   29782 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:46:10.788841674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:46:10.882432   29782 out.go:177] * Using the docker driver based on user configuration
	I0725 16:46:10.910394   29782 start.go:284] selected driver: docker
	I0725 16:46:10.910412   29782 start.go:808] validating driver "docker" against <nil>
	I0725 16:46:10.910427   29782 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:46:10.912497   29782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:46:11.046681   29782 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:46:10.973048849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:46:11.046950   29782 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0725 16:46:11.047418   29782 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 16:46:11.068158   29782 out.go:177] * Using Docker Desktop driver with root privileges
	I0725 16:46:11.093711   29782 cni.go:95] Creating CNI manager for ""
	I0725 16:46:11.093792   29782 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:46:11.093813   29782 start_flags.go:310] config:
	{Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:46:11.115982   29782 out.go:177] * Starting control plane node old-k8s-version-20220725164610-14919 in cluster old-k8s-version-20220725164610-14919
	I0725 16:46:11.158866   29782 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:46:11.179889   29782 out.go:177] * Pulling base image ...
	I0725 16:46:11.200672   29782 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:46:11.200675   29782 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:46:11.200735   29782 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 16:46:11.200746   29782 cache.go:57] Caching tarball of preloaded images
	I0725 16:46:11.200867   29782 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 16:46:11.200885   29782 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0725 16:46:11.201651   29782 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/config.json ...
	I0725 16:46:11.201768   29782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/config.json: {Name:mk955f69d70cbcbb80d125cc6b9304ecc5ad65f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:46:11.264663   29782 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:46:11.264696   29782 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:46:11.264708   29782 cache.go:208] Successfully downloaded all kic artifacts
	I0725 16:46:11.264770   29782 start.go:370] acquiring machines lock for old-k8s-version-20220725164610-14919: {Name:mk039986a3467f394c941873ee88acd0fb616596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:46:11.264928   29782 start.go:374] acquired machines lock for "old-k8s-version-20220725164610-14919" in 146.57µs
	I0725 16:46:11.264955   29782 start.go:92] Provisioning new machine with config: &{Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 16:46:11.265041   29782 start.go:132] createHost starting for "" (driver="docker")
	I0725 16:46:11.308632   29782 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0725 16:46:11.308986   29782 start.go:166] libmachine.API.Create for "old-k8s-version-20220725164610-14919" (driver="docker")
	I0725 16:46:11.309033   29782 client.go:168] LocalClient.Create starting
	I0725 16:46:11.309249   29782 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem
	I0725 16:46:11.309324   29782 main.go:134] libmachine: Decoding PEM data...
	I0725 16:46:11.309354   29782 main.go:134] libmachine: Parsing certificate...
	I0725 16:46:11.309455   29782 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem
	I0725 16:46:11.309525   29782 main.go:134] libmachine: Decoding PEM data...
	I0725 16:46:11.309545   29782 main.go:134] libmachine: Parsing certificate...
	I0725 16:46:11.310589   29782 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220725164610-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 16:46:11.374877   29782 cli_runner.go:211] docker network inspect old-k8s-version-20220725164610-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 16:46:11.374975   29782 network_create.go:272] running [docker network inspect old-k8s-version-20220725164610-14919] to gather additional debugging logs...
	I0725 16:46:11.375008   29782 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220725164610-14919
	W0725 16:46:11.437925   29782 cli_runner.go:211] docker network inspect old-k8s-version-20220725164610-14919 returned with exit code 1
	I0725 16:46:11.437954   29782 network_create.go:275] error running [docker network inspect old-k8s-version-20220725164610-14919]: docker network inspect old-k8s-version-20220725164610-14919: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220725164610-14919
	I0725 16:46:11.437986   29782 network_create.go:277] output of [docker network inspect old-k8s-version-20220725164610-14919]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220725164610-14919
	
	** /stderr **
	I0725 16:46:11.438064   29782 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 16:46:11.501583   29782 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003cc1f0] misses:0}
	I0725 16:46:11.501623   29782 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:46:11.501646   29782 network_create.go:115] attempt to create docker network old-k8s-version-20220725164610-14919 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0725 16:46:11.501721   29782 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220725164610-14919 old-k8s-version-20220725164610-14919
	W0725 16:46:11.565008   29782 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220725164610-14919 old-k8s-version-20220725164610-14919 returned with exit code 1
	W0725 16:46:11.565042   29782 network_create.go:107] failed to create docker network old-k8s-version-20220725164610-14919 192.168.49.0/24, will retry: subnet is taken
	I0725 16:46:11.565341   29782 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003cc1f0] amended:false}} dirty:map[] misses:0}
	I0725 16:46:11.565359   29782 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:46:11.565585   29782 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003cc1f0] amended:true}} dirty:map[192.168.49.0:0xc0003cc1f0 192.168.58.0:0xc000ba1070] misses:0}
	I0725 16:46:11.565600   29782 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:46:11.565623   29782 network_create.go:115] attempt to create docker network old-k8s-version-20220725164610-14919 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0725 16:46:11.565691   29782 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220725164610-14919 old-k8s-version-20220725164610-14919
	W0725 16:46:11.628990   29782 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220725164610-14919 old-k8s-version-20220725164610-14919 returned with exit code 1
	W0725 16:46:11.629028   29782 network_create.go:107] failed to create docker network old-k8s-version-20220725164610-14919 192.168.58.0/24, will retry: subnet is taken
	I0725 16:46:11.629329   29782 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003cc1f0] amended:true}} dirty:map[192.168.49.0:0xc0003cc1f0 192.168.58.0:0xc000ba1070] misses:1}
	I0725 16:46:11.629345   29782 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:46:11.629547   29782 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003cc1f0] amended:true}} dirty:map[192.168.49.0:0xc0003cc1f0 192.168.58.0:0xc000ba1070 192.168.67.0:0xc000794100] misses:1}
	I0725 16:46:11.629567   29782 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 16:46:11.629575   29782 network_create.go:115] attempt to create docker network old-k8s-version-20220725164610-14919 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0725 16:46:11.629667   29782 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220725164610-14919 old-k8s-version-20220725164610-14919
	I0725 16:46:11.726873   29782 network_create.go:99] docker network old-k8s-version-20220725164610-14919 192.168.67.0/24 created
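The three attempts above are minikube's free-subnet walk: it proposes 192.168.49.0/24, and each time `docker network create` exits 1 because the subnet is taken it reserves the next candidate (49 → 58 → 67, stepping by 9) until creation succeeds. A minimal Go sketch of that loop, assuming only the docker CLI on PATH; the network name and candidate range are illustrative, not minikube's actual network_create.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const name = "demo-minikube-net" // illustrative network name
    	for third := 49; third <= 103; third += 9 { // same +9 walk seen in the log
    		subnet := fmt.Sprintf("192.168.%d.0/24", third)
    		gateway := fmt.Sprintf("192.168.%d.1", third)
    		out, err := exec.Command("docker", "network", "create",
    			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
    		if err != nil {
    			// docker exits non-zero when the pool overlaps an existing network: try the next candidate
    			fmt.Printf("subnet %s taken: %s", subnet, out)
    			continue
    		}
    		fmt.Printf("created network %s on %s\n", name, subnet)
    		return
    	}
    	fmt.Println("no free private subnet found")
    }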
	I0725 16:46:11.726914   29782 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-20220725164610-14919" container
	I0725 16:46:11.727029   29782 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 16:46:11.795602   29782 cli_runner.go:164] Run: docker volume create old-k8s-version-20220725164610-14919 --label name.minikube.sigs.k8s.io=old-k8s-version-20220725164610-14919 --label created_by.minikube.sigs.k8s.io=true
	I0725 16:46:11.860123   29782 oci.go:103] Successfully created a docker volume old-k8s-version-20220725164610-14919
	I0725 16:46:11.860240   29782 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220725164610-14919-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220725164610-14919 --entrypoint /usr/bin/test -v old-k8s-version-20220725164610-14919:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0725 16:46:12.353546   29782 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220725164610-14919
	I0725 16:46:12.353597   29782 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:46:12.353611   29782 kic.go:179] Starting extracting preloaded images to volume ...
	I0725 16:46:12.353701   29782 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220725164610-14919:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0725 16:46:16.173221   29782 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220725164610-14919:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (3.819359251s)
	I0725 16:46:16.173249   29782 kic.go:188] duration metric: took 3.819603 seconds to extract preloaded images to volume
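The extraction step mounts the preload tarball read-only into a throwaway kicbase container and untars it into the named volume, then logs the elapsed time as a duration metric. A hedged Go sketch of the same docker invocation (the paths and image tag below are placeholders modeled on the command logged above; error handling simplified):

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	tarball := "/path/to/preloaded-images.tar.lz4" // illustrative host path
    	volume := "old-k8s-version-20220725164610-14919"
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481"
    	start := time.Now()
    	cmd := exec.Command("docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    	log.Printf("took %s to extract preloaded images to volume", time.Since(start))
    }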
	I0725 16:46:16.173370   29782 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 16:46:16.308706   29782 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220725164610-14919 --name old-k8s-version-20220725164610-14919 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220725164610-14919 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220725164610-14919 --network old-k8s-version-20220725164610-14919 --ip 192.168.67.2 --volume old-k8s-version-20220725164610-14919:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0725 16:46:16.685659   29782 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725164610-14919 --format={{.State.Running}}
	I0725 16:46:16.763483   29782 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725164610-14919 --format={{.State.Status}}
	I0725 16:46:16.849007   29782 cli_runner.go:164] Run: docker exec old-k8s-version-20220725164610-14919 stat /var/lib/dpkg/alternatives/iptables
	I0725 16:46:16.979886   29782 oci.go:144] the created container "old-k8s-version-20220725164610-14919" has a running status.
	I0725 16:46:16.979929   29782 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa...
	I0725 16:46:17.050673   29782 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0725 16:46:17.203041   29782 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725164610-14919 --format={{.State.Status}}
	I0725 16:46:17.275157   29782 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 16:46:17.275176   29782 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220725164610-14919 chown docker:docker /home/docker/.ssh/authorized_keys]
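kic.go generates a fresh RSA keypair on the host, and the public half becomes the container's /home/docker/.ssh/authorized_keys (the 381-byte copy above), chowned to the docker user. A minimal sketch of producing that key material, assuming golang.org/x/crypto/ssh; file names are illustrative:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// the private half stays on the host as the machine's id_rsa
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv),
    	})
    	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
    		log.Fatal(err)
    	}
    	// the public half is what lands in the container's authorized_keys
    	pub, err := ssh.NewPublicKey(&priv.PublicKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }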
	I0725 16:46:17.404271   29782 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725164610-14919 --format={{.State.Status}}
	I0725 16:46:17.476561   29782 machine.go:88] provisioning docker machine ...
	I0725 16:46:17.476599   29782 ubuntu.go:169] provisioning hostname "old-k8s-version-20220725164610-14919"
	I0725 16:46:17.476719   29782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:46:17.549516   29782 main.go:134] libmachine: Using SSH client type: native
	I0725 16:46:17.549719   29782 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50345 <nil> <nil>}
	I0725 16:46:17.549738   29782 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220725164610-14919 && echo "old-k8s-version-20220725164610-14919" | sudo tee /etc/hostname
	I0725 16:46:17.686312   29782 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220725164610-14919
	
	I0725 16:46:17.686423   29782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:46:17.760797   29782 main.go:134] libmachine: Using SSH client type: native
	I0725 16:46:17.760969   29782 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50345 <nil> <nil>}
	I0725 16:46:17.760986   29782 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220725164610-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220725164610-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220725164610-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 16:46:17.884514   29782 main.go:134] libmachine: SSH cmd err, output: <nil>: 
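Each "About to run SSH command" above travels over a native SSH client dialed at the host port docker published for the container's 22/tcp (127.0.0.1:50345 here), authenticating with the generated machine key. A minimal sketch of that round trip, assuming golang.org/x/crypto/ssh; this illustrates the pattern, not libmachine's exact code:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("id_rsa") // the machine key generated above
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a throwaway local container
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:50345", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s", out)
    }

Note that an *ssh.Session runs exactly one command, so each logged SSH command implies a fresh session on the same client.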
	I0725 16:46:17.884536   29782 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:46:17.884557   29782 ubuntu.go:177] setting up certificates
	I0725 16:46:17.884564   29782 provision.go:83] configureAuth start
	I0725 16:46:17.884632   29782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:46:17.957910   29782 provision.go:138] copyHostCerts
	I0725 16:46:17.957991   29782 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:46:17.958000   29782 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:46:17.958105   29782 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:46:17.958288   29782 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:46:17.958299   29782 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:46:17.958367   29782 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:46:17.958502   29782 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:46:17.958508   29782 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:46:17.958565   29782 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:46:17.958693   29782 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220725164610-14919 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220725164610-14919]
	I0725 16:46:18.122713   29782 provision.go:172] copyRemoteCerts
	I0725 16:46:18.122771   29782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:46:18.122818   29782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:46:18.194469   29782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50345 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:46:18.281037   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:46:18.297284   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0725 16:46:18.314349   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 16:46:18.331082   29782 provision.go:86] duration metric: configureAuth took 446.500407ms
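configureAuth regenerates the docker server certificate so its subject alternative names cover the new node (the san=[...] list logged above: the container IP, loopback, and the machine names). A minimal crypto/x509 sketch of issuing such a cert against an already-loaded CA; caCert, caKey, and serverKey are assumed inputs, and the function name is illustrative:

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServerCert signs a docker server cert whose SANs mirror the san=[...] list above.
    func issueServerCert(caCert *x509.Certificate, caKey, serverKey *rsa.PrivateKey) ([]byte, error) {
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-20220725164610-14919"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(3, 0, 0), // 3 years, i.e. the 26280h0m0s CertExpiration seen in the profile
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-20220725164610-14919"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		return nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }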
	I0725 16:46:18.331094   29782 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:46:18.331234   29782 config.go:178] Loaded profile config "old-k8s-version-20220725164610-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 16:46:18.331297   29782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:46:18.403902   29782 main.go:134] libmachine: Using SSH client type: native
	I0725 16:46:18.404048   29782 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50345 <nil> <nil>}
	I0725 16:46:18.404070   29782 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:46:18.525407   29782 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:46:18.525422   29782 ubuntu.go:71] root file system type: overlay
	I0725 16:46:18.525592   29782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:46:18.525688   29782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:46:18.597376   29782 main.go:134] libmachine: Using SSH client type: native
	I0725 16:46:18.597534   29782 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50345 <nil> <nil>}
	I0725 16:46:18.598102   29782 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:46:18.725150   29782 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:46:18.725252   29782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:46:18.796830   29782 main.go:134] libmachine: Using SSH client type: native
	I0725 16:46:18.797062   29782 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50345 <nil> <nil>}
	I0725 16:46:18.797075   29782 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 16:46:19.403756   29782 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 23:46:18.730640370 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0725 16:46:19.403778   29782 machine.go:91] provisioned docker machine in 1.927180611s
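The unified diff above is the interesting part of the unit update: the provisioner writes the rendered unit to docker.service.new, and only when `diff -u` exits non-zero (the files differ) does the `||` branch swap it into place and daemon-reload/restart docker; on an already-provisioned host the whole step is a no-op. A sketch of that guard as run over SSH (same command string as the log; execution detail assumed):

    // diff -u exits 0 on identical files, so the || block runs exactly when an update is needed.
    const swapUnit = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
    	`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
    	`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
    // run over a fresh *ssh.Session, as in the earlier dial sketch:
    // out, err := sess.CombinedOutput(swapUnit)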
	I0725 16:46:19.403784   29782 client.go:171] LocalClient.Create took 8.094669795s
	I0725 16:46:19.403800   29782 start.go:174] duration metric: libmachine.API.Create for "old-k8s-version-20220725164610-14919" took 8.094742401s
	I0725 16:46:19.403810   29782 start.go:307] post-start starting for "old-k8s-version-20220725164610-14919" (driver="docker")
	I0725 16:46:19.403816   29782 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:46:19.403880   29782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:46:19.403930   29782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:46:19.478018   29782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50345 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:46:19.565626   29782 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:46:19.568982   29782 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:46:19.568996   29782 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:46:19.569002   29782 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:46:19.569010   29782 info.go:137] Remote host: Ubuntu 20.04.4 LTS
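The three "Couldn't set key" warnings are benign: /etc/os-release is a KEY="VALUE" file, and the parser only maps the keys it models onto its struct, which is why "Remote host: Ubuntu 20.04.4 LTS" still resolves. A minimal sketch of that parse (map-based for brevity; the struct mapping is minikube's, not shown):

    import "strings"

    // parseOSRelease turns /etc/os-release KEY="VALUE" lines into a map; a caller mapping
    // onto a fixed struct simply skips keys it doesn't model (the warnings above).
    func parseOSRelease(data string) map[string]string {
    	info := make(map[string]string)
    	for _, line := range strings.Split(data, "\n") {
    		if strings.HasPrefix(line, "#") {
    			continue
    		}
    		k, v, ok := strings.Cut(line, "=")
    		if !ok {
    			continue
    		}
    		info[k] = strings.Trim(v, `"`)
    	}
    	return info
    }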
	I0725 16:46:19.569019   29782 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:46:19.569132   29782 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:46:19.569267   29782 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:46:19.569426   29782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:46:19.576479   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:46:19.593706   29782 start.go:310] post-start completed in 189.885895ms
	I0725 16:46:19.594314   29782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:46:19.668193   29782 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/config.json ...
	I0725 16:46:19.668589   29782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:46:19.668635   29782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:46:19.740873   29782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50345 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:46:19.826649   29782 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:46:19.830981   29782 start.go:135] duration metric: createHost completed in 8.565855596s
	I0725 16:46:19.830995   29782 start.go:82] releasing machines lock for "old-k8s-version-20220725164610-14919", held for 8.56598057s
	I0725 16:46:19.831066   29782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:46:19.902909   29782 ssh_runner.go:195] Run: systemctl --version
	I0725 16:46:19.902910   29782 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:46:19.902977   29782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:46:19.903011   29782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:46:19.983800   29782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50345 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:46:19.985718   29782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50345 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:46:20.294735   29782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:46:20.305284   29782 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:46:20.305353   29782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:46:20.314475   29782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:46:20.327435   29782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 16:46:20.400138   29782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:46:20.470700   29782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:46:20.559602   29782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:46:20.768117   29782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:46:20.805929   29782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:46:20.863347   29782 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0725 16:46:20.863495   29782 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220725164610-14919 dig +short host.docker.internal
	I0725 16:46:20.995516   29782 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:46:20.995618   29782 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:46:21.000347   29782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
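The bash one-liner above is an idempotent upsert: strip any existing host.minikube.internal line, append the fresh IP-to-name mapping, and copy the temp file back over /etc/hosts. The same transformation as a local Go illustration (the function name is hypothetical):

    import "strings"

    // upsertHost drops any prior "<ip>\t<name>" line and appends the new mapping,
    // mirroring the grep -v / echo / cp pipeline in the log.
    func upsertHost(hosts, ip, name string) string {
    	var keep []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale entry removed
    		}
    		keep = append(keep, line)
    	}
    	keep = append(keep, ip+"\t"+name)
    	return strings.Join(keep, "\n") + "\n"
    }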
	I0725 16:46:21.010406   29782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:46:21.082389   29782 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:46:21.082453   29782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:46:21.114056   29782 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 16:46:21.114075   29782 docker.go:542] Images already preloaded, skipping extraction
	I0725 16:46:21.114148   29782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:46:21.144305   29782 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 16:46:21.144319   29782 cache_images.go:84] Images are preloaded, skipping loading
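"Images are preloaded, skipping loading" comes from comparing the `docker images --format {{.Repository}}:{{.Tag}}` listing inside the node against the expected preload set. A hedged sketch of that check; the function name is illustrative and the expected list would come from the preload manifest:

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    // preloadSatisfied reports whether every expected image is already present
    // in the node's docker daemon.
    func preloadSatisfied(expected []string) bool {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		log.Printf("docker images: %v", err)
    		return false
    	}
    	have := make(map[string]bool)
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	for _, img := range expected {
    		if !have[img] {
    			return false // extraction or image loading still required
    		}
    	}
    	return true
    }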
	I0725 16:46:21.144394   29782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 16:46:21.220443   29782 cni.go:95] Creating CNI manager for ""
	I0725 16:46:21.220455   29782 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:46:21.220473   29782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:46:21.220489   29782 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220725164610-14919 NodeName:old-k8s-version-20220725164610-14919 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:46:21.220592   29782 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220725164610-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220725164610-14919
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 16:46:21.220686   29782 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220725164610-14919 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 16:46:21.220754   29782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0725 16:46:21.228505   29782 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 16:46:21.228570   29782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:46:21.235672   29782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0725 16:46:21.248816   29782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 16:46:21.261314   29782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
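"scp memory" means the payload never touches the host disk: the rendered bytes (here the 362-byte kubelet drop-in, the 352-byte unit, and the 2148-byte kubeadm.yaml.new) stream straight into the remote file. One way to do that over an x/crypto/ssh session is to pipe stdin into sudo tee; this approach and the function name are assumptions for illustration, not necessarily minikube's exact transfer code:

    import (
    	"bytes"
    	"log"

    	"golang.org/x/crypto/ssh"
    )

    // pushMemory streams in-memory bytes into a remote path via sudo tee.
    func pushMemory(client *ssh.Client, data []byte, remotePath string) {
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data) // e.g. the rendered kubeadm.yaml.new
    	if err := sess.Run("sudo tee " + remotePath + " >/dev/null"); err != nil {
    		log.Fatal(err)
    	}
    }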
	I0725 16:46:21.274744   29782 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:46:21.278699   29782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:46:21.288108   29782 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919 for IP: 192.168.67.2
	I0725 16:46:21.288228   29782 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:46:21.288276   29782 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:46:21.288324   29782 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/client.key
	I0725 16:46:21.288338   29782 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/client.crt with IP's: []
	I0725 16:46:21.455252   29782 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/client.crt ...
	I0725 16:46:21.455267   29782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/client.crt: {Name:mkba6d49e732aafbde0f5c116921b417f7520b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:46:21.455596   29782 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/client.key ...
	I0725 16:46:21.455605   29782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/client.key: {Name:mkcc4da1b55f9ac8437ce38c3418fabda4e6bba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:46:21.455808   29782 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key.c7fa3a9e
	I0725 16:46:21.455826   29782 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0725 16:46:21.561242   29782 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.crt.c7fa3a9e ...
	I0725 16:46:21.561257   29782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.crt.c7fa3a9e: {Name:mkcf9d43af0ccbdc9ff1f78ea99457959e637fd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:46:21.561524   29782 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key.c7fa3a9e ...
	I0725 16:46:21.561532   29782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key.c7fa3a9e: {Name:mkc804d5234ce41537260b885f3b20b29b3fa423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:46:21.561797   29782 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.crt
	I0725 16:46:21.561944   29782 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key
	I0725 16:46:21.562092   29782 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.key
	I0725 16:46:21.562106   29782 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.crt with IP's: []
	I0725 16:46:21.792130   29782 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.crt ...
	I0725 16:46:21.792147   29782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.crt: {Name:mk121670e71eb11856a11e2556c62e7f208ca4a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:46:21.792468   29782 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.key ...
	I0725 16:46:21.792476   29782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.key: {Name:mkf976ad498043ffcd3566cd738d6ac1bb298be2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:46:21.792903   29782 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:46:21.792944   29782 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:46:21.792953   29782 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:46:21.792982   29782 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:46:21.793009   29782 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:46:21.793038   29782 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:46:21.793098   29782 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:46:21.793530   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:46:21.812144   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 16:46:21.828779   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:46:21.844949   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 16:46:21.861464   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:46:21.878524   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:46:21.895836   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:46:21.913606   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:46:21.931503   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:46:21.950632   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:46:21.967673   29782 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:46:21.986710   29782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 16:46:21.999366   29782 ssh_runner.go:195] Run: openssl version
	I0725 16:46:22.004836   29782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:46:22.013409   29782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:46:22.017276   29782 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:46:22.017323   29782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:46:22.022491   29782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 16:46:22.030754   29782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:46:22.038417   29782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:46:22.043141   29782 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:46:22.043182   29782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:46:22.048426   29782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
	I0725 16:46:22.056232   29782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:46:22.064198   29782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:46:22.068191   29782 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:46:22.068234   29782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:46:22.073340   29782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
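The b5213941.0, 51391683.0, and 3ec20f2e.0 link names above are OpenSSL subject hashes: the TLS stack locates a CA in /etc/ssl/certs by hashing its subject and looking for <hash>.0, so each installed PEM needs a matching symlink. A minimal Go sketch of creating one, assuming the openssl CLI (the function name is illustrative):

    import (
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkBySubjectHash creates the <hash>.0 symlink OpenSSL uses for CA lookup.
    func linkBySubjectHash(pemPath, certsDir string) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem, as in the log
    	link := certsDir + "/" + hash + ".0"
    	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
    		log.Fatal(err)
    	}
    }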
	I0725 16:46:22.080818   29782 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:46:22.080951   29782 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:46:22.109891   29782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:46:22.117611   29782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:46:22.125078   29782 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:46:22.125134   29782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:46:22.133021   29782 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:46:22.133046   29782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
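	The Start line above is the actual control-plane bootstrap: kubeadm init run over SSH inside the node container, with a fixed list of preflight checks ignored. While the container is still running, the same init can be replayed interactively for debugging; a sketch using the container name and binary path shown in this log (--ignore-preflight-errors=all substituted for the long list above for brevity, and --v=5 added per kubeadm's own hint):
	
		docker exec -it old-k8s-version-20220725164610-14919 /bin/bash -c \
		  'sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init \
		    --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all --v=5'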
	I0725 16:46:22.891995   29782 out.go:204]   - Generating certificates and keys ...
	I0725 16:46:25.333087   29782 out.go:204]   - Booting up control plane ...
	W0725 16:48:20.245972   29782 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220725164610-14919 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220725164610-14919 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
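	The kubelet-check failures above are kubeadm polling the kubelet's healthz endpoint on port 10248 and getting connection refused, meaning the kubelet never came up. The same probe, plus the two commands the output recommends, can be run against the node container directly (container name from this run; assumes curl is available in the kicbase image):
	
		docker exec old-k8s-version-20220725164610-14919 curl -sSL http://localhost:10248/healthz
		docker exec old-k8s-version-20220725164610-14919 systemctl status kubelet
		docker exec old-k8s-version-20220725164610-14919 journalctl -xeu kubelet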
	
	I0725 16:48:20.246004   29782 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:48:20.666578   29782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:48:20.676169   29782 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:48:20.676227   29782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:48:20.684004   29782 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:48:20.684035   29782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:48:21.439261   29782 out.go:204]   - Generating certificates and keys ...
	I0725 16:48:22.253755   29782 out.go:204]   - Booting up control plane ...
	I0725 16:50:17.204789   29782 kubeadm.go:397] StartCluster complete in 3m55.088856621s
	I0725 16:50:17.204866   29782 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:50:17.233794   29782 logs.go:274] 0 containers: []
	W0725 16:50:17.233805   29782 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:50:17.233863   29782 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:50:17.262487   29782 logs.go:274] 0 containers: []
	W0725 16:50:17.262500   29782 logs.go:276] No container was found matching "etcd"
	I0725 16:50:17.262558   29782 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:50:17.290903   29782 logs.go:274] 0 containers: []
	W0725 16:50:17.290915   29782 logs.go:276] No container was found matching "coredns"
	I0725 16:50:17.290989   29782 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:50:17.322903   29782 logs.go:274] 0 containers: []
	W0725 16:50:17.322914   29782 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:50:17.322973   29782 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:50:17.354086   29782 logs.go:274] 0 containers: []
	W0725 16:50:17.354099   29782 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:50:17.354165   29782 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:50:17.386984   29782 logs.go:274] 0 containers: []
	W0725 16:50:17.386997   29782 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:50:17.387061   29782 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:50:17.418252   29782 logs.go:274] 0 containers: []
	W0725 16:50:17.418264   29782 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:50:17.418324   29782 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:50:17.447478   29782 logs.go:274] 0 containers: []
	W0725 16:50:17.447493   29782 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:50:17.447502   29782 logs.go:123] Gathering logs for dmesg ...
	I0725 16:50:17.447510   29782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:50:17.459218   29782 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:50:17.459230   29782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:50:17.513431   29782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:50:17.513443   29782 logs.go:123] Gathering logs for Docker ...
	I0725 16:50:17.513451   29782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:50:17.528588   29782 logs.go:123] Gathering logs for container status ...
	I0725 16:50:17.528602   29782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:50:19.583068   29782 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054395585s)
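	The container-status probe just above intentionally falls back from crictl to docker ps when crictl is missing from PATH. Replayed standalone against the node container (one-liner quoted from the log; the backticks must stay inside single quotes so `which` runs in the container, not on the host):
	
		docker exec old-k8s-version-20220725164610-14919 /bin/bash -c 'sudo `which crictl || echo crictl` ps -a || sudo docker ps -a'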
	I0725 16:50:19.583215   29782 logs.go:123] Gathering logs for kubelet ...
	I0725 16:50:19.583222   29782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0725 16:50:19.624134   29782 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 16:50:19.624157   29782 out.go:239] * 
	W0725 16:50:19.624283   29782 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 16:50:19.624296   29782 out.go:239] * 
	W0725 16:50:19.624852   29782 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
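	Following the box above, the complete log bundle for this profile can be captured for attachment with (binary and profile name from this run; -p selects the profile, --file writes the bundle to disk):
	
		out/minikube-darwin-amd64 logs --file=logs.txt -p old-k8s-version-20220725164610-14919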
	I0725 16:50:19.688626   29782 out.go:177] 
	W0725 16:50:19.731051   29782 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 16:50:19.731194   29782 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 16:50:19.731267   29782 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
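	The suggestion above points at a kubelet/Docker cgroup-driver mismatch, a common cause of K8S_KUBELET_NOT_RUNNING on older Kubernetes versions. A quick check of the driver Docker is actually using (real docker CLI flag; if it reports cgroupfs rather than systemd, retry the same start command with the quoted --extra-config=kubelet.cgroup-driver=systemd appended):
	
		docker info --format '{{.CgroupDriver}}'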
	I0725 16:50:19.773651   29782 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220725164610-14919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
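To reproduce this failure outside the test harness, the assertion above already carries the exact invocation; copied verbatim from its args:

	out/minikube-darwin-amd64 start -p old-k8s-version-20220725164610-14919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0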
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725164610-14919
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725164610-14919:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf",
	        "Created": "2022-07-25T23:46:16.38043483Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225636,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:46:16.682079128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hosts",
	        "LogPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf-json.log",
	        "Name": "/old-k8s-version-20220725164610-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725164610-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725164610-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c700ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/docker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a700e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725164610-14919",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725164610-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725164610-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c2e0e898d7aecc974586f5a52d5113cb10ba43580ba7f36615d084c19e3b3031",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50345"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50346"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50347"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50348"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50349"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c2e0e898d7ae",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725164610-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3e45dea9c014",
	                        "old-k8s-version-20220725164610-14919"
	                    ],
	                    "NetworkID": "cc2155f0f89448c4255b6f474f0b34c64b5460d3acc5441984909bacee63d7d6",
	                    "EndpointID": "463fd2f847a1a0964bd41bd7713c5c525e0b8be204ab1966d296c2aed2692d2b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 6 (464.672445ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 16:50:20.394855   30474 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220725164610-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220725164610-14919" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (249.96s)
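Triage sketch: the exit status 6 above traces back to a stale kubeconfig — FirstStart never wrote this profile's endpoint into the kubeconfig the harness points at, so the status helper cannot extract an IP. A minimal local check, assuming the same profile name and that minikube and kubectl are on PATH (illustrative commands, not harness output):

	# verify whether the profile's context made it into the active kubeconfig
	kubectl config get-contexts
	# rewrite the endpoint for the profile, as the stdout warning suggests
	minikube update-context -p old-k8s-version-20220725164610-14919
	# re-run the same probe the test helper uses
	out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919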

TestStartStop/group/old-k8s-version/serial/DeployApp (1.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220725164610-14919 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220725164610-14919 create -f testdata/busybox.yaml: exit status 1 (29.81183ms)

** stderr ** 
	error: context "old-k8s-version-20220725164610-14919" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-20220725164610-14919 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725164610-14919
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725164610-14919:

-- stdout --
	[
	    {
	        "Id": "3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf",
	        "Created": "2022-07-25T23:46:16.38043483Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225636,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:46:16.682079128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hosts",
	        "LogPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf-json.log",
	        "Name": "/old-k8s-version-20220725164610-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725164610-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725164610-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c700ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/docker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a700e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725164610-14919",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725164610-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725164610-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c2e0e898d7aecc974586f5a52d5113cb10ba43580ba7f36615d084c19e3b3031",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50345"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50346"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50347"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50348"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50349"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c2e0e898d7ae",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725164610-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3e45dea9c014",
	                        "old-k8s-version-20220725164610-14919"
	                    ],
	                    "NetworkID": "cc2155f0f89448c4255b6f474f0b34c64b5460d3acc5441984909bacee63d7d6",
	                    "EndpointID": "463fd2f847a1a0964bd41bd7713c5c525e0b8be204ab1966d296c2aed2692d2b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 6 (448.883364ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 16:50:20.952485   30487 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220725164610-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220725164610-14919" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725164610-14919
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725164610-14919:

-- stdout --
	[
	    {
	        "Id": "3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf",
	        "Created": "2022-07-25T23:46:16.38043483Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225636,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:46:16.682079128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hosts",
	        "LogPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf-json.log",
	        "Name": "/old-k8s-version-20220725164610-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725164610-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725164610-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c700ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/docker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a700e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725164610-14919",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725164610-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725164610-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c2e0e898d7aecc974586f5a52d5113cb10ba43580ba7f36615d084c19e3b3031",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50345"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50346"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50347"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50348"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50349"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c2e0e898d7ae",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725164610-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3e45dea9c014",
	                        "old-k8s-version-20220725164610-14919"
	                    ],
	                    "NetworkID": "cc2155f0f89448c4255b6f474f0b34c64b5460d3acc5441984909bacee63d7d6",
	                    "EndpointID": "463fd2f847a1a0964bd41bd7713c5c525e0b8be204ab1966d296c2aed2692d2b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 6 (455.717379ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 16:50:21.480810   30499 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220725164610-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220725164610-14919" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.08s)
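Triage sketch: DeployApp fails within about a second because FirstStart never created the kubectl context, so every kubectl --context invocation aborts before reaching the cluster. A guard one might use when reproducing locally (illustrative only; assumes the same profile name):

	# confirm the context exists before applying manifests
	kubectl config get-contexts old-k8s-version-20220725164610-14919 || minikube update-context -p old-k8s-version-20220725164610-14919
	# then retry the step the test performs
	kubectl --context old-k8s-version-20220725164610-14919 create -f testdata/busybox.yaml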

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220725164610-14919 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0725 16:50:26.796568   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:50:34.039368   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:53.734990   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:50:55.131904   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:55.137062   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:55.147142   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:55.167861   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:55.209867   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:55.292082   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:55.454372   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:55.774988   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:56.415260   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:57.228557   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:57.233979   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:57.246216   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:57.266508   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:57.308718   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:57.390871   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:57.551181   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:57.697041   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:57.872091   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:58.514410   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:59.453141   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:59.794713   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:51:00.257258   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:51:02.354884   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:51:05.377430   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:51:07.475442   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:51:07.757516   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:51:10.672279   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:51:14.290826   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:51:15.001953   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:51:15.619611   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:51:17.716512   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:51:25.080027   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:51:27.143307   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:51:36.099945   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:51:38.196801   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:51:42.015967   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:51:43.987617   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220725164610-14919 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.180896947s)

-- stdout --
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
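Every apply in the callback above failed with "connection refused" on 127.0.0.1:8443, i.e. the apiserver inside the node was not accepting connections when the addon callbacks ran. A minimal way to confirm that from the host, assuming the profile and kubectl context still exist:

	out/minikube-darwin-amd64 status -p old-k8s-version-20220725164610-14919
	kubectl --context old-k8s-version-20220725164610-14919 get --raw /healthz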
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220725164610-14919 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220725164610-14919 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220725164610-14919 describe deploy/metrics-server -n kube-system: exit status 1 (31.015146ms)

** stderr ** 
	error: context "old-k8s-version-20220725164610-14919" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220725164610-14919 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
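The image assertion here amounts to reading the container image out of the deployment spec; a sketch of the equivalent manual check, assuming the context were valid:

	kubectl --context old-k8s-version-20220725164610-14919 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'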
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725164610-14919
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725164610-14919:

-- stdout --
	[
	    {
	        "Id": "3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf",
	        "Created": "2022-07-25T23:46:16.38043483Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225636,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:46:16.682079128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hosts",
	        "LogPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf-json.log",
	        "Name": "/old-k8s-version-20220725164610-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725164610-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725164610-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c700ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/docker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a700e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725164610-14919",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725164610-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725164610-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c2e0e898d7aecc974586f5a52d5113cb10ba43580ba7f36615d084c19e3b3031",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50345"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50346"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50347"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50348"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50349"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c2e0e898d7ae",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725164610-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3e45dea9c014",
	                        "old-k8s-version-20220725164610-14919"
	                    ],
	                    "NetworkID": "cc2155f0f89448c4255b6f474f0b34c64b5460d3acc5441984909bacee63d7d6",
	                    "EndpointID": "463fd2f847a1a0964bd41bd7713c5c525e0b8be204ab1966d296c2aed2692d2b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
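When only a few of these fields matter, the same data can be pulled with a Go template instead of scanning the full JSON; a sketch using the same index pattern minikube itself applies to port 22/tcp later in this log:

	docker inspect -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-20220725164610-14919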
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 6 (450.17057ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 16:51:51.218041   30613 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220725164610-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220725164610-14919" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.74s)
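The status output above names the fix for the stale context; a sketch of applying it, assuming the profile directory is still intact:

	out/minikube-darwin-amd64 update-context -p old-k8s-version-20220725164610-14919
	kubectl config current-context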

TestStartStop/group/old-k8s-version/serial/SecondStart (492.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220725164610-14919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0725 16:51:55.966254   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:52:11.678541   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:52:17.062186   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:52:19.158554   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:52:29.680555   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:52:36.923888   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:53:30.439771   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:53:38.982928   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:53:41.079951   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:53:41.240015   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220725164610-14919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m7.168294575s)

-- stdout --
	* [old-k8s-version-20220725164610-14919] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	* Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220725164610-14919 in cluster old-k8s-version-20220725164610-14919
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220725164610-14919" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
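The duplicated "Generating certificates and keys ..." / "Booting up control plane ..." lines indicate the control-plane bootstrap was attempted more than once before the run gave up with exit status 109. Per the advice box earlier in this report, the full logs for this profile could be collected with:

	out/minikube-darwin-amd64 logs --file=logs.txt -p old-k8s-version-20220725164610-14919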
** stderr ** 
	I0725 16:51:53.294201   30645 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:51:53.294366   30645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:51:53.294371   30645 out.go:309] Setting ErrFile to fd 2...
	I0725 16:51:53.294375   30645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:51:53.294471   30645 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:51:53.294941   30645 out.go:303] Setting JSON to false
	I0725 16:51:53.309887   30645 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10036,"bootTime":1658783077,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:51:53.309984   30645 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:51:53.331402   30645 out.go:177] * [old-k8s-version-20220725164610-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:51:53.373600   30645 notify.go:193] Checking for updates...
	I0725 16:51:53.395513   30645 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:51:53.417111   30645 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:51:53.438407   30645 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:51:53.459736   30645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:51:53.481553   30645 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:51:53.504223   30645 config.go:178] Loaded profile config "old-k8s-version-20220725164610-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 16:51:53.526315   30645 out.go:177] * Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	I0725 16:51:53.547450   30645 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:51:53.618847   30645 docker.go:137] docker version: linux-20.10.17
	I0725 16:51:53.618995   30645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:51:53.753067   30645 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:51:53.688740284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:51:53.796714   30645 out.go:177] * Using the docker driver based on existing profile
	I0725 16:51:53.817466   30645 start.go:284] selected driver: docker
	I0725 16:51:53.817494   30645 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:51:53.817613   30645 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:51:53.820630   30645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:51:53.953927   30645 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:51:53.891132742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:51:53.954103   30645 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 16:51:53.954124   30645 cni.go:95] Creating CNI manager for ""
	I0725 16:51:53.954135   30645 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:51:53.954143   30645 start_flags.go:310] config:
	{Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:51:53.997664   30645 out.go:177] * Starting control plane node old-k8s-version-20220725164610-14919 in cluster old-k8s-version-20220725164610-14919
	I0725 16:51:54.018754   30645 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:51:54.039707   30645 out.go:177] * Pulling base image ...
	I0725 16:51:54.082764   30645 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:51:54.082795   30645 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:51:54.082852   30645 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 16:51:54.082881   30645 cache.go:57] Caching tarball of preloaded images
	I0725 16:51:54.083082   30645 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 16:51:54.083106   30645 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0725 16:51:54.084260   30645 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/config.json ...
	I0725 16:51:54.147078   30645 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:51:54.147095   30645 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:51:54.147107   30645 cache.go:208] Successfully downloaded all kic artifacts
	I0725 16:51:54.147181   30645 start.go:370] acquiring machines lock for old-k8s-version-20220725164610-14919: {Name:mk039986a3467f394c941873ee88acd0fb616596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:51:54.147261   30645 start.go:374] acquired machines lock for "old-k8s-version-20220725164610-14919" in 61.057µs
	I0725 16:51:54.147278   30645 start.go:95] Skipping create...Using existing machine configuration
	I0725 16:51:54.147288   30645 fix.go:55] fixHost starting: 
	I0725 16:51:54.147527   30645 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725164610-14919 --format={{.State.Status}}
	I0725 16:51:54.215341   30645 fix.go:103] recreateIfNeeded on old-k8s-version-20220725164610-14919: state=Stopped err=<nil>
	W0725 16:51:54.215374   30645 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 16:51:54.259242   30645 out.go:177] * Restarting existing docker container for "old-k8s-version-20220725164610-14919" ...
	I0725 16:51:54.284887   30645 cli_runner.go:164] Run: docker start old-k8s-version-20220725164610-14919
	I0725 16:51:54.645993   30645 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725164610-14919 --format={{.State.Status}}
	I0725 16:51:54.722808   30645 kic.go:415] container "old-k8s-version-20220725164610-14919" state is running.
	I0725 16:51:54.723439   30645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:51:54.808300   30645 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/config.json ...
	I0725 16:51:54.808762   30645 machine.go:88] provisioning docker machine ...
	I0725 16:51:54.808790   30645 ubuntu.go:169] provisioning hostname "old-k8s-version-20220725164610-14919"
	I0725 16:51:54.808863   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:54.891385   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:54.891620   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:54.891634   30645 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220725164610-14919 && echo "old-k8s-version-20220725164610-14919" | sudo tee /etc/hostname
	I0725 16:51:55.024662   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220725164610-14919
	
	I0725 16:51:55.024757   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.103341   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:55.103525   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:55.103544   30645 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220725164610-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220725164610-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220725164610-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
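	# The snippet above idempotently pins the node hostname to 127.0.1.1: it rewrites
	# an existing 127.0.1.1 entry when present, appends one otherwise, and does nothing
	# when a matching hosts entry already exists. A sketch of verifying the result over
	# the forwarded SSH port shown in this log (port 50823, user docker, the profile's
	# id_rsa key, MINIKUBE_HOME as set at the top of this run):
	#   ssh -i "$MINIKUBE_HOME/machines/old-k8s-version-20220725164610-14919/id_rsa" \
	#     -p 50823 docker@127.0.0.1 'grep 127.0.1.1 /etc/hosts'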
	I0725 16:51:55.230047   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:51:55.230076   30645 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:51:55.230107   30645 ubuntu.go:177] setting up certificates
	I0725 16:51:55.230119   30645 provision.go:83] configureAuth start
	I0725 16:51:55.230190   30645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:51:55.301676   30645 provision.go:138] copyHostCerts
	I0725 16:51:55.301768   30645 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:51:55.301778   30645 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:51:55.301894   30645 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:51:55.302095   30645 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:51:55.302104   30645 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:51:55.302175   30645 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:51:55.302315   30645 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:51:55.302321   30645 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:51:55.302379   30645 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:51:55.302507   30645 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220725164610-14919 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220725164610-14919]
	I0725 16:51:55.405165   30645 provision.go:172] copyRemoteCerts
	I0725 16:51:55.405225   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:51:55.405293   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.477166   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:55.565264   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:51:55.582096   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0725 16:51:55.599314   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 16:51:55.616047   30645 provision.go:86] duration metric: configureAuth took 385.912561ms
	I0725 16:51:55.616059   30645 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:51:55.616211   30645 config.go:178] Loaded profile config "old-k8s-version-20220725164610-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 16:51:55.616261   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.687491   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:55.687629   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:55.687638   30645 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:51:55.809152   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:51:55.809170   30645 ubuntu.go:71] root file system type: overlay
	I0725 16:51:55.809333   30645 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:51:55.809407   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.886743   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:55.886909   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:55.886957   30645 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:51:56.015134   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:51:56.015230   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.087087   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:56.087253   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:56.087280   30645 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
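	# The `diff -u old new || { ... }` idiom above keys off diff's exit status: it is
	# non-zero only when the files differ (or the old file is missing), so the
	# move/daemon-reload/restart branch runs only when the unit actually changed.
	# Minimal sketch of the same pattern with a hypothetical service:
	#   diff -u /etc/app.conf /etc/app.conf.new || {
	#     sudo mv /etc/app.conf.new /etc/app.conf
	#     sudo systemctl restart app
	#   }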
	I0725 16:51:56.212027   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:51:56.212044   30645 machine.go:91] provisioned docker machine in 1.403264453s
	I0725 16:51:56.212055   30645 start.go:307] post-start starting for "old-k8s-version-20220725164610-14919" (driver="docker")
	I0725 16:51:56.212061   30645 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:51:56.212133   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:51:56.212177   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.283031   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:56.372939   30645 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:51:56.376433   30645 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:51:56.376447   30645 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:51:56.376454   30645 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:51:56.376458   30645 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 16:51:56.376467   30645 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:51:56.376572   30645 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:51:56.376727   30645 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:51:56.376875   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:51:56.383744   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:51:56.400937   30645 start.go:310] post-start completed in 188.872215ms
	I0725 16:51:56.401013   30645 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:51:56.401059   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.472425   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:56.558421   30645 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:51:56.562865   30645 fix.go:57] fixHost completed within 2.41556105s
	I0725 16:51:56.562873   30645 start.go:82] releasing machines lock for "old-k8s-version-20220725164610-14919", held for 2.415589014s
	I0725 16:51:56.562940   30645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:51:56.634630   30645 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:51:56.634634   30645 ssh_runner.go:195] Run: systemctl --version
	I0725 16:51:56.634711   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.634710   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.712937   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:56.715060   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:57.028274   30645 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:51:57.039409   30645 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:51:57.039463   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:51:57.050978   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:51:57.064294   30645 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 16:51:57.131183   30645 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:51:57.197441   30645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:51:57.258729   30645 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:51:57.458205   30645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:51:57.493961   30645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:51:57.573579   30645 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0725 16:51:57.573720   30645 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220725164610-14919 dig +short host.docker.internal
	I0725 16:51:57.708897   30645 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:51:57.708998   30645 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:51:57.713113   30645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
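	# The hosts update above filters out any stale host.minikube.internal entry, appends
	# a fresh one, and writes the result back with `cp` rather than `mv`: inside a
	# container /etc/hosts is typically a bind mount, so it can be rewritten in place but
	# not replaced by rename. The same pattern with illustrative values:
	#   { grep -v $'\texample.internal$' /etc/hosts; printf '10.0.0.1\texample.internal\n'; } > /tmp/h.$$
	#   sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$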
	I0725 16:51:57.723064   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:57.796445   30645 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:51:57.796515   30645 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:51:57.828170   30645 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 16:51:57.828195   30645 docker.go:542] Images already preloaded, skipping extraction
	I0725 16:51:57.828273   30645 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:51:57.862686   30645 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 16:51:57.862711   30645 cache_images.go:84] Images are preloaded, skipping loading
	I0725 16:51:57.862784   30645 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 16:51:57.934841   30645 cni.go:95] Creating CNI manager for ""
	I0725 16:51:57.934857   30645 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:51:57.934882   30645 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:51:57.934897   30645 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220725164610-14919 NodeName:old-k8s-version-20220725164610-14919 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:51:57.934999   30645 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220725164610-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220725164610-14919
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
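	# The config above is one multi-document YAML (InitConfiguration, ClusterConfiguration,
	# KubeletConfiguration and KubeProxyConfiguration, separated by ---). kubeadm consumes
	# the whole file, and each init phase picks out the documents it needs, as in the phase
	# invocations later in this log, e.g.:
	#   sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	#   sudo kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml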
	
	I0725 16:51:57.935085   30645 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220725164610-14919 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
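	# As with the Docker unit, the kubelet drop-in clears ExecStart= before setting the
	# full command line. To confirm what systemd ended up running on the node (a sketch):
	#   sudo systemctl cat kubelet    # base unit plus the 10-kubeadm.conf drop-in
	#   ps -o args= -C kubelet        # the flags the running kubelet was started with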
	I0725 16:51:57.935149   30645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0725 16:51:57.942882   30645 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 16:51:57.942933   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:51:57.949836   30645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0725 16:51:57.962118   30645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 16:51:57.974768   30645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0725 16:51:57.987611   30645 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:51:57.991547   30645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:51:58.001422   30645 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919 for IP: 192.168.67.2
	I0725 16:51:58.001534   30645 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:51:58.001584   30645 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:51:58.001665   30645 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/client.key
	I0725 16:51:58.001725   30645 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key.c7fa3a9e
	I0725 16:51:58.001774   30645 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.key
	I0725 16:51:58.001977   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:51:58.002018   30645 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:51:58.002033   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:51:58.002065   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:51:58.002099   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:51:58.002130   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:51:58.002200   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:51:58.002745   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:51:58.019176   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 16:51:58.035937   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:51:58.052722   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 16:51:58.069150   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:51:58.086282   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:51:58.104583   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:51:58.122151   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:51:58.138902   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:51:58.155678   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:51:58.172462   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:51:58.189680   30645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 16:51:58.202927   30645 ssh_runner.go:195] Run: openssl version
	I0725 16:51:58.208487   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:51:58.216327   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:51:58.220281   30645 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:51:58.220320   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:51:58.225423   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 16:51:58.232569   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:51:58.240681   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:51:58.246603   30645 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:51:58.246655   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:51:58.252424   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
	I0725 16:51:58.259635   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:51:58.267350   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:51:58.271022   30645 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:51:58.271059   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:51:58.276368   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
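	# The symlinks above follow OpenSSL's hashed-directory convention: `openssl x509 -hash`
	# prints the certificate's subject hash (b5213941 for minikubeCA.pem here), and TLS
	# clients resolve a CA by looking up <hash>.0 under /etc/ssl/certs. The same lookup by
	# hand (a sketch):
	#   h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	#   ls -l "/etc/ssl/certs/$h.0"   # should point back at minikubeCA.pem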
	I0725 16:51:58.285978   30645 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:51:58.286085   30645 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:51:58.315858   30645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:51:58.326514   30645 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 16:51:58.326531   30645 kubeadm.go:626] restartCluster start
	I0725 16:51:58.326585   30645 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 16:51:58.333523   30645 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:58.333587   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:58.406233   30645 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220725164610-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:51:58.406423   30645 kubeconfig.go:127] "old-k8s-version-20220725164610-14919" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig - will repair!
	I0725 16:51:58.406758   30645 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:51:58.408147   30645 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 16:51:58.416141   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:58.416194   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:58.424141   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:58.624252   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:58.624449   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:58.634727   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:58.824496   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:58.824556   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:58.833401   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.024564   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.024765   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.036943   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.224262   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.224449   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.234247   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.426277   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.426421   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.436848   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.624325   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.624444   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.634776   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.824436   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.824539   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.833466   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.024667   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.024784   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.034119   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.226332   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.226493   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.237410   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.424816   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.424991   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.435741   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.624358   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.624553   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.634929   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.824246   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.824311   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.833267   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.025582   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.025682   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.036617   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.226302   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.226523   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.237134   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.424681   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.424896   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.434950   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.434960   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.435004   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.443251   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.443262   30645 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0725 16:52:01.443270   30645 kubeadm.go:1092] stopping kube-system containers ...
	I0725 16:52:01.443330   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:52:01.472271   30645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 16:52:01.482849   30645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:52:01.490579   30645 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5751 Jul 25 23:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5787 Jul 25 23:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Jul 25 23:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jul 25 23:48 /etc/kubernetes/scheduler.conf
	
	I0725 16:52:01.490646   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 16:52:01.497991   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 16:52:01.505650   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 16:52:01.513404   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 16:52:01.520481   30645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:52:01.528605   30645 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 16:52:01.528616   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:01.582488   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:02.177208   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:02.396495   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:02.452157   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:02.507122   30645 api_server.go:51] waiting for apiserver process to appear ...
	I0725 16:52:02.507183   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:03.017988   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:03.516813   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:04.016024   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:04.516243   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:05.016052   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:05.516842   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:06.018016   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:06.516243   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:07.016833   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:07.516509   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:08.018237   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:08.516285   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:09.018225   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:09.516196   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:10.016108   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:10.518092   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:11.016235   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:11.516051   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:12.017661   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:12.517835   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:13.017094   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:13.517087   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:14.016089   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:14.516418   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:15.016429   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:15.516149   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:16.016347   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:16.516154   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:17.016835   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:17.516145   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:18.016344   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:18.516408   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:19.016498   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:19.517496   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:20.016992   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:20.516251   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:21.016222   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:21.517681   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:22.016475   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:22.516287   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:23.018246   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:23.516453   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:24.016928   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:24.518267   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:25.016180   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:25.517130   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:26.016427   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:26.516198   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:27.018318   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:27.518273   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:28.017144   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:28.517115   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:29.016589   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:29.516148   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:30.018359   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:30.516196   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:31.016729   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:31.516466   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:32.016321   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:32.516187   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:33.016955   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:33.518380   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:34.016250   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:34.518380   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:35.017698   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:35.516226   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:36.016845   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:36.517175   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:37.016458   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:37.518343   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:38.017221   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:38.516631   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:39.018346   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:39.517031   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:40.016587   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:40.518374   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:41.017168   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:41.516254   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:42.016786   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:42.518371   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:43.016708   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:43.517350   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:44.016879   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:44.516359   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:45.016326   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:45.517079   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:46.018104   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:46.516554   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:47.016350   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:47.516869   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:48.016960   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:48.518539   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:49.016387   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:49.518485   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:50.016779   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:50.516308   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:51.016390   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:51.516855   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:52.016682   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:52.516798   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:53.017157   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:53.516791   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:54.018461   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:54.518509   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:55.016394   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:55.518239   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:56.016393   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:56.516649   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:57.018403   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:57.518492   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:58.016728   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:58.516610   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:59.016695   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:59.516374   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:00.018527   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:00.516554   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:01.016461   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:01.518568   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:02.018357   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:02.516570   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:02.551458   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.551470   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:02.551529   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:02.580662   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.580676   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:02.580736   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:02.609061   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.609077   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:02.609153   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:02.637777   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.637789   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:02.637848   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:02.668016   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.668032   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:02.668098   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:02.695681   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.695695   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:02.695759   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:02.724166   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.724179   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:02.724241   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:02.752726   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.752738   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:02.752745   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:02.752752   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:02.766718   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:02.766729   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:04.817904   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051150373s)
	I0725 16:53:04.818052   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:04.818058   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:04.859354   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:04.859367   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:04.872868   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:04.872886   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:04.925729   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
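	# "connection refused" on localhost:8443 only means no apiserver is listening yet; the
	# pgrep loop above keeps polling for the process while kubelet brings the static pods
	# up. A direct probe from the node would be (a sketch):
	#   curl -sk https://localhost:8443/healthz   # returns "ok" once the apiserver serves traffic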
	I0725 16:53:07.427981   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:07.518459   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:07.547888   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.547903   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:07.547963   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:07.577077   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.577088   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:07.577149   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:07.605370   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.605382   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:07.605438   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:07.634582   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.634594   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:07.634664   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:07.662717   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.662730   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:07.662796   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:07.690179   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.690191   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:07.690247   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:07.718778   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.718797   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:07.718860   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:07.750543   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.750557   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:07.750566   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:07.750582   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:07.813932   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:07.813946   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:07.813953   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:07.830288   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:07.830306   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:09.887017   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056682522s)
	I0725 16:53:09.887208   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:09.887216   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:09.934241   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:09.934269   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:12.447495   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:12.517256   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:12.548709   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.548724   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:12.548801   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:12.581560   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.581573   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:12.581636   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:12.613258   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.613277   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:12.613356   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:12.645116   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.645132   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:12.645192   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:12.678405   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.678430   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:12.678496   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:12.709850   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.709862   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:12.709929   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:12.739704   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.739717   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:12.739780   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:12.771373   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.771390   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:12.771397   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:12.771409   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:14.832595   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061157284s)
	I0725 16:53:14.832749   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:14.832760   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:14.882568   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:14.882589   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:14.894614   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:14.894627   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:14.964822   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:14.964845   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:14.964855   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:17.480696   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:17.516779   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:17.560432   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.560445   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:17.560504   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:17.590394   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.590408   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:17.590480   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:17.620155   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.620169   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:17.620234   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:17.651346   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.651376   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:17.651448   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:17.683049   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.683062   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:17.683121   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:17.720876   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.720905   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:17.720964   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:17.768214   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.768254   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:17.768357   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:17.800978   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.800991   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:17.800999   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:17.801005   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:17.814855   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:17.814871   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:19.878600   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.063699299s)
	I0725 16:53:19.878715   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:19.878726   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:19.927808   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:19.927830   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:19.942138   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:19.942177   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:20.000061   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
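Each describe-nodes attempt fails identically: kubectl is refused on localhost:8443 because, per the container checks in the same pass, no kube-apiserver container was ever created. A sketch of the same probe pair, re-run by hand from a shell on the node (both commands appear verbatim in the log; the pgrep pattern is quoted here only to keep the shell from globbing it):

    # The process probe that gates each retry, then the kubectl call that is
    # refused while nothing listens on localhost:8443.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig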
	I0725 16:53:22.501063   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:22.516620   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:22.546166   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.546178   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:22.546235   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:22.574812   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.574824   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:22.574886   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:22.604962   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.604974   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:22.605036   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:22.636264   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.636278   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:22.636339   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:22.665920   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.665932   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:22.665993   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:22.696167   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.696179   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:22.696236   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:22.729381   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.729392   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:22.729454   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:22.768159   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.768172   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:22.768207   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:22.768215   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:22.813804   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:22.813818   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:22.826686   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:22.826700   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:22.889943   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:22.889958   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:22.889964   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:22.905871   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:22.905885   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:24.961550   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055639315s)
	I0725 16:53:27.462514   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:27.516705   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:27.547013   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.547025   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:27.547088   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:27.575083   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.575095   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:27.575151   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:27.607755   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.607767   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:27.607822   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:27.636173   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.636184   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:27.636251   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:27.664856   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.664867   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:27.664930   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:27.695642   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.695655   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:27.695717   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:27.725344   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.725358   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:27.725417   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:27.754182   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.754195   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:27.754202   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:27.754208   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:27.767896   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:27.767911   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:27.824064   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:27.824076   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:27.824083   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:27.838119   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:27.838131   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:29.892047   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053889683s)
	I0725 16:53:29.892158   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:29.892165   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
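Within each pass, the dmesg capture is already filtered at the source. Annotated against the util-linux dmesg flags (an aside, not from the report):

    # -H human-readable output, -P no pager, -L=never no color codes,
    # --level keeps only warning-and-worse messages; tail keeps the newest 400.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400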
	I0725 16:53:32.435110   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:32.516701   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:32.562525   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.562538   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:32.562604   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:32.599075   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.599087   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:32.599145   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:32.640588   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.640615   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:32.640684   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:32.675235   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.675248   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:32.675311   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:32.711380   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.711392   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:32.711462   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:32.745360   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.745373   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:32.745433   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:32.782468   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.782484   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:32.782569   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:32.815537   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.815551   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:32.815557   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:32.815565   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:32.828567   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:32.828584   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:32.884919   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:32.884933   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:32.884941   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:32.900762   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:32.900776   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:34.964971   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064168222s)
	I0725 16:53:34.965217   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:34.965226   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:37.509560   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:38.016974   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:38.049541   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.049558   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:38.049618   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:38.080721   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.080733   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:38.080816   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:38.109733   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.109744   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:38.109803   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:38.141301   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.141313   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:38.141400   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:38.172007   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.172020   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:38.172078   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:38.204450   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.204463   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:38.204520   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:38.234269   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.234281   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:38.234336   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:38.263197   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.263210   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:38.263217   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:38.263223   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:40.321875   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058620912s)
	I0725 16:53:40.321982   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:40.321997   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:40.368300   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:40.368320   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:40.382186   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:40.382201   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:40.442970   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:40.442981   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:40.442987   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:42.961513   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:43.017747   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:43.047988   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.048000   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:43.048060   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:43.082642   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.082655   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:43.082783   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:43.112812   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.112825   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:43.112882   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:43.142469   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.142480   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:43.142543   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:43.172983   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.172996   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:43.173055   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:43.202378   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.202390   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:43.202456   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:43.232448   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.232462   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:43.232525   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:43.262110   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.262123   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:43.262132   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:43.262140   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:45.319732   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057561012s)
	I0725 16:53:45.319846   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:45.319854   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:45.365923   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:45.365943   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:45.379753   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:45.379771   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:45.457284   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:45.457297   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:45.457305   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:47.975040   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:48.018317   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:48.049476   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.049489   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:48.049548   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:48.078953   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.078965   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:48.079037   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:48.109058   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.109071   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:48.109129   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:48.139159   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.139172   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:48.139228   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:48.169256   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.169267   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:48.169325   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:48.201872   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.201885   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:48.201948   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:48.234103   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.234115   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:48.234178   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:48.266166   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.266179   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:48.266186   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:48.266197   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:48.314601   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:48.318681   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:48.332826   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:48.332841   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:48.388055   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:48.388067   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:48.388075   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:48.402457   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:48.402469   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:50.456667   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054172699s)
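The container-status step is a fallback chain: prefer crictl when installed, otherwise substitute the literal word crictl (whose invocation then fails) and fall through to the Docker CLI. Annotated, the command each pass runs is:

    # As run by logs.go: `which crictl` emits a path when crictl is installed;
    # otherwise echo supplies the bare name, that call fails, and the
    # trailing || falls back to listing containers with docker ps -a.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a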
	I0725 16:53:52.958273   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:53.018286   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:53.051254   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.051266   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:53.051325   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:53.080846   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.080858   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:53.080914   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:53.109160   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.109183   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:53.109257   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:53.137615   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.137628   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:53.137684   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:53.167697   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.167709   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:53.167765   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:53.198156   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.198169   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:53.198278   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:53.227704   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.227716   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:53.227773   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:53.257307   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.257320   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:53.257327   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:53.257336   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:53.299296   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:53.317934   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:53.330698   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:53.330712   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:53.385054   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:53.385066   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:53.385073   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:53.399132   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:53.399145   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:55.451174   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052002587s)
	I0725 16:53:57.951589   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:58.016855   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:58.049205   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.049216   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:58.049274   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:58.079929   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.079941   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:58.080000   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:58.109713   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.109725   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:58.109785   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:58.138994   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.139008   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:58.139116   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:58.168661   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.168675   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:58.168733   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:58.197795   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.197807   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:58.197867   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:58.226708   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.226719   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:58.226777   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:58.255098   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.255109   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:58.255116   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:58.255123   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:58.295859   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:58.317170   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:58.329926   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:58.329941   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:58.382781   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:58.382793   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:58.382826   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:58.397360   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:58.397372   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:00.450881   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053483262s)
	I0725 16:54:02.951232   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:03.018983   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:03.050556   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.050569   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:03.050627   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:03.079230   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.079242   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:03.079298   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:03.108412   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.108425   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:03.108483   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:03.136613   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.136626   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:03.136688   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:03.165794   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.165805   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:03.165862   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:03.194455   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.194471   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:03.194539   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:03.226412   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.226426   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:03.226490   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:03.261052   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.261064   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:03.261072   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:03.261081   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:05.315384   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054277623s)
	I0725 16:54:05.315492   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:05.315500   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:05.354732   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:05.354744   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:05.366506   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:05.366519   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:05.419168   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:05.419178   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:05.419185   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:07.935013   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:08.017181   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:08.048536   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.048557   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:08.048619   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:08.080579   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.080592   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:08.080652   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:08.108274   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.108287   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:08.108346   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:08.138319   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.138331   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:08.138390   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:08.168384   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.168395   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:08.168452   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:08.198022   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.198034   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:08.198092   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:08.226920   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.226933   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:08.226991   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:08.257052   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.257063   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:08.257070   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:08.257078   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:08.268657   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:08.268690   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:08.320782   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:08.320793   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:08.320799   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:08.334711   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:08.334722   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:10.390667   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05591657s)
	I0725 16:54:10.390776   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:10.390784   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:12.930154   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:13.016938   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:13.046701   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.046713   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:13.046769   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:13.076212   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.076225   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:13.076282   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:13.106089   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.106099   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:13.106147   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:13.136688   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.136702   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:13.136762   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:13.166341   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.166353   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:13.166412   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:13.194833   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.194844   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:13.194910   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:13.223450   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.223462   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:13.223522   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:13.253571   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.253583   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:13.253590   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:13.253596   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:13.296069   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:13.296080   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:13.308497   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:13.317701   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:13.373112   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:13.373126   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:13.373135   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:13.387086   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:13.387099   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:15.443702   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056574496s)
	I0725 16:54:17.946094   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:18.019154   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:18.050260   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.050273   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:18.050335   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:18.079777   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.079789   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:18.079847   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:18.111380   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.111393   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:18.111445   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:18.143959   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.143969   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:18.144021   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:18.180312   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.180332   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:18.180399   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:18.215895   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.215911   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:18.215963   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:18.252789   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.252802   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:18.252852   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:18.290782   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.290810   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:18.290818   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:18.290847   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:18.303512   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:18.317352   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:18.376087   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:18.376098   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:18.376106   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:18.390833   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:18.390853   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:20.449118   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05823903s)
	I0725 16:54:20.449231   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:20.449238   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:22.992397   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:23.017255   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:23.045826   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.045844   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:23.045915   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:23.075162   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.075174   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:23.075229   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:23.105247   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.105260   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:23.105315   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:23.134037   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.134056   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:23.134113   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:23.163197   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.163211   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:23.163269   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:23.192645   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.192657   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:23.192714   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:23.220793   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.220804   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:23.220863   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:23.250836   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.250847   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:23.250854   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:23.250860   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:25.307612   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056726692s)
	I0725 16:54:25.307719   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:25.307726   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:25.346156   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:25.346168   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:25.358492   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:25.358504   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:25.410340   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:25.410351   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:25.410358   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:27.924097   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:28.017834   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:28.049566   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.049580   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:28.049646   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:28.079671   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.079685   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:28.079744   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:28.108629   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.108641   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:28.108696   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:28.137881   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.137893   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:28.137954   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:28.166821   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.166834   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:28.166898   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:28.196515   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.196527   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:28.196590   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:28.225959   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.225971   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:28.226028   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:28.254555   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.254567   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:28.254574   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:28.254581   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:30.308050   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053443356s)
	I0725 16:54:30.308156   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:30.308162   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:30.347803   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:30.347816   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:30.360116   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:30.360128   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:30.413675   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:30.413687   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:30.413693   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:32.929655   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:33.019242   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:33.052472   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.052485   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:33.052542   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:33.081513   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.081531   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:33.081586   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:33.112328   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.112340   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:33.112399   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:33.140741   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.140755   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:33.140820   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:33.171364   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.171382   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:33.171441   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:33.203103   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.203116   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:33.203176   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:33.233444   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.233456   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:33.233522   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:33.265044   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.265056   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:33.265063   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:33.265071   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:33.306110   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:33.317535   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:33.330969   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:33.330983   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:33.383185   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:33.383196   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:33.383205   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:33.396721   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:33.396739   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:35.470448   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.07368252s)
	I0725 16:54:37.970815   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:38.017644   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:38.047388   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.047404   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:38.047456   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:38.077940   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.077954   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:38.078049   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:38.111761   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.111773   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:38.111835   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:38.148082   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.148095   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:38.148162   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:38.180302   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.180314   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:38.180369   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:38.211612   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.211627   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:38.211690   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:38.241709   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.241720   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:38.241775   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:38.273560   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.273574   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:38.273581   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:38.273588   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:38.317933   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:38.333023   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:38.346938   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:38.346952   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:38.403505   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:38.403518   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:38.403525   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:38.421917   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:38.421930   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:40.482008   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06005199s)
	I0725 16:54:42.982265   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:43.018148   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:43.047131   30645 logs.go:274] 0 containers: []
	W0725 16:54:43.047144   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:43.047198   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:43.078001   30645 logs.go:274] 0 containers: []
	W0725 16:54:43.078014   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:43.078074   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:43.112065   30645 logs.go:274] 0 containers: []
	W0725 16:54:43.112083   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:43.112147   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:43.142919   30645 logs.go:274] 0 containers: []
	W0725 16:54:43.142951   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:43.143007   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:43.172584   30645 logs.go:274] 0 containers: []
	W0725 16:54:43.172595   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:43.172668   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:43.202449   30645 logs.go:274] 0 containers: []
	W0725 16:54:43.202461   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:43.202519   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:43.232248   30645 logs.go:274] 0 containers: []
	W0725 16:54:43.232261   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:43.232317   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:43.264952   30645 logs.go:274] 0 containers: []
	W0725 16:54:43.264965   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:43.264976   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:43.264985   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:45.320415   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055401838s)
	I0725 16:54:45.320528   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:45.320536   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:45.365669   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:45.365686   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:45.380056   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:45.380070   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:45.446304   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:45.446315   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:45.446321   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:47.963456   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:48.017622   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:48.050585   30645 logs.go:274] 0 containers: []
	W0725 16:54:48.050597   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:48.050653   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:48.081593   30645 logs.go:274] 0 containers: []
	W0725 16:54:48.081626   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:48.081682   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:48.112218   30645 logs.go:274] 0 containers: []
	W0725 16:54:48.112231   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:48.112292   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:48.142856   30645 logs.go:274] 0 containers: []
	W0725 16:54:48.142889   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:48.142949   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:48.179028   30645 logs.go:274] 0 containers: []
	W0725 16:54:48.179040   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:48.179100   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:48.217277   30645 logs.go:274] 0 containers: []
	W0725 16:54:48.217288   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:48.217335   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:48.263659   30645 logs.go:274] 0 containers: []
	W0725 16:54:48.263675   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:48.263751   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:48.297025   30645 logs.go:274] 0 containers: []
	W0725 16:54:48.316515   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:48.316526   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:48.316533   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:48.332965   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:48.332980   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:50.390555   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057546872s)
	I0725 16:54:50.390667   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:50.390674   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:50.437437   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:50.437458   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:50.453737   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:50.453767   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:50.524477   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
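
Every "describe nodes" attempt in this run fails identically: with no kube-apiserver container ever started, nothing listens on localhost:8443, so kubectl's connection is refused and the command exits with status 1. An illustrative check against the same endpoint (not part of the test) would be refused the same way:

    # With no apiserver bound to localhost:8443, the TCP connect is refused:
    curl -k https://localhost:8443/healthz
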
	I0725 16:54:53.026419   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:53.517843   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:53.556866   30645 logs.go:274] 0 containers: []
	W0725 16:54:53.556879   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:53.556937   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:53.594076   30645 logs.go:274] 0 containers: []
	W0725 16:54:53.594089   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:53.594167   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:53.649071   30645 logs.go:274] 0 containers: []
	W0725 16:54:53.649085   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:53.649152   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:53.694148   30645 logs.go:274] 0 containers: []
	W0725 16:54:53.694160   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:53.694219   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:53.736073   30645 logs.go:274] 0 containers: []
	W0725 16:54:53.736087   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:53.736151   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:53.773120   30645 logs.go:274] 0 containers: []
	W0725 16:54:53.773132   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:53.773191   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:53.821573   30645 logs.go:274] 0 containers: []
	W0725 16:54:53.821587   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:53.821650   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:53.858760   30645 logs.go:274] 0 containers: []
	W0725 16:54:53.858772   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:53.858779   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:53.858786   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:53.909168   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:53.909188   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:53.926441   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:53.926459   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:54.022643   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:54.022662   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:54.022674   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:54.037711   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:54.037727   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:56.090894   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053141376s)
	I0725 16:54:58.592921   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:59.017440   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:59.051654   30645 logs.go:274] 0 containers: []
	W0725 16:54:59.051681   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:59.051797   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:59.091235   30645 logs.go:274] 0 containers: []
	W0725 16:54:59.091249   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:59.091312   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:59.122373   30645 logs.go:274] 0 containers: []
	W0725 16:54:59.122386   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:59.122453   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:59.156518   30645 logs.go:274] 0 containers: []
	W0725 16:54:59.156531   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:59.156591   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:59.186424   30645 logs.go:274] 0 containers: []
	W0725 16:54:59.186436   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:59.186500   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:59.214474   30645 logs.go:274] 0 containers: []
	W0725 16:54:59.214486   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:59.214547   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:59.245514   30645 logs.go:274] 0 containers: []
	W0725 16:54:59.245529   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:59.245593   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:59.277412   30645 logs.go:274] 0 containers: []
	W0725 16:54:59.277424   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:59.277431   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:59.277438   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:59.289988   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:59.290004   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:59.350021   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:59.350045   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:59.350052   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:59.365792   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:59.365806   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:01.425854   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060021979s)
	I0725 16:55:01.425991   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:01.425998   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:03.971019   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:04.017827   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:04.047787   30645 logs.go:274] 0 containers: []
	W0725 16:55:04.047800   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:04.047860   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:04.078712   30645 logs.go:274] 0 containers: []
	W0725 16:55:04.078726   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:04.078794   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:04.111185   30645 logs.go:274] 0 containers: []
	W0725 16:55:04.111200   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:04.111268   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:04.143503   30645 logs.go:274] 0 containers: []
	W0725 16:55:04.143516   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:04.143576   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:04.175582   30645 logs.go:274] 0 containers: []
	W0725 16:55:04.175597   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:04.175672   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:04.208469   30645 logs.go:274] 0 containers: []
	W0725 16:55:04.208482   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:04.208551   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:04.243258   30645 logs.go:274] 0 containers: []
	W0725 16:55:04.243271   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:04.243332   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:04.274361   30645 logs.go:274] 0 containers: []
	W0725 16:55:04.274375   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:04.274381   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:04.274388   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:55:04.289669   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:04.289682   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:06.344108   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054400313s)
	I0725 16:55:06.344216   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:06.344224   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:06.387163   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:55:06.387181   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:55:06.399018   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:06.399031   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:06.453025   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:08.955309   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:09.017633   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:09.048671   30645 logs.go:274] 0 containers: []
	W0725 16:55:09.048682   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:09.048746   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:09.078085   30645 logs.go:274] 0 containers: []
	W0725 16:55:09.078109   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:09.078171   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:09.107136   30645 logs.go:274] 0 containers: []
	W0725 16:55:09.107149   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:09.107214   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:09.137534   30645 logs.go:274] 0 containers: []
	W0725 16:55:09.137561   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:09.137622   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:09.172756   30645 logs.go:274] 0 containers: []
	W0725 16:55:09.172769   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:09.172827   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:09.205889   30645 logs.go:274] 0 containers: []
	W0725 16:55:09.205901   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:09.205962   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:09.242016   30645 logs.go:274] 0 containers: []
	W0725 16:55:09.242031   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:09.242094   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:09.274280   30645 logs.go:274] 0 containers: []
	W0725 16:55:09.274294   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:09.274301   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:09.274307   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:55:09.290075   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:09.290087   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:11.342952   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052837688s)
	I0725 16:55:11.343077   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:11.343086   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:11.389654   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:55:11.389677   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:55:11.404643   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:11.404660   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:11.503347   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:14.005782   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:14.017442   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:14.045654   30645 logs.go:274] 0 containers: []
	W0725 16:55:14.045665   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:14.045724   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:14.074758   30645 logs.go:274] 0 containers: []
	W0725 16:55:14.074770   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:14.074831   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:14.102855   30645 logs.go:274] 0 containers: []
	W0725 16:55:14.102868   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:14.102927   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:14.133301   30645 logs.go:274] 0 containers: []
	W0725 16:55:14.133314   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:14.133381   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:14.164346   30645 logs.go:274] 0 containers: []
	W0725 16:55:14.164358   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:14.164416   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:14.194833   30645 logs.go:274] 0 containers: []
	W0725 16:55:14.194847   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:14.194912   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:14.225678   30645 logs.go:274] 0 containers: []
	W0725 16:55:14.225707   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:14.225779   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:14.259670   30645 logs.go:274] 0 containers: []
	W0725 16:55:14.259683   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:14.259693   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:14.259707   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:14.327824   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:14.327835   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:14.327843   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:55:14.344492   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:14.344507   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:16.410144   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065610246s)
	I0725 16:55:16.410256   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:16.410263   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:16.456411   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:55:16.456435   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:55:18.971711   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:19.017511   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:19.048078   30645 logs.go:274] 0 containers: []
	W0725 16:55:19.048090   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:19.048146   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:19.076617   30645 logs.go:274] 0 containers: []
	W0725 16:55:19.076630   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:19.076689   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:19.105952   30645 logs.go:274] 0 containers: []
	W0725 16:55:19.105965   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:19.106027   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:19.135836   30645 logs.go:274] 0 containers: []
	W0725 16:55:19.135847   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:19.135903   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:19.164866   30645 logs.go:274] 0 containers: []
	W0725 16:55:19.164879   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:19.164936   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:19.193599   30645 logs.go:274] 0 containers: []
	W0725 16:55:19.193611   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:19.193670   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:19.223120   30645 logs.go:274] 0 containers: []
	W0725 16:55:19.223132   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:19.223188   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:19.251842   30645 logs.go:274] 0 containers: []
	W0725 16:55:19.251856   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:19.251863   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:55:19.251870   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:55:19.263508   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:19.263519   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:19.318965   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:19.318976   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:19.318984   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:55:19.335701   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:19.335716   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:21.391900   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056158047s)
	I0725 16:55:21.392007   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:21.392014   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:23.931698   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:24.019504   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:24.052917   30645 logs.go:274] 0 containers: []
	W0725 16:55:24.052931   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:24.052994   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:24.080548   30645 logs.go:274] 0 containers: []
	W0725 16:55:24.080560   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:24.080623   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:24.109434   30645 logs.go:274] 0 containers: []
	W0725 16:55:24.109447   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:24.109505   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:24.141869   30645 logs.go:274] 0 containers: []
	W0725 16:55:24.141881   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:24.141944   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:24.173994   30645 logs.go:274] 0 containers: []
	W0725 16:55:24.174007   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:24.174067   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:24.204419   30645 logs.go:274] 0 containers: []
	W0725 16:55:24.204430   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:24.204493   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:24.234105   30645 logs.go:274] 0 containers: []
	W0725 16:55:24.234118   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:24.234182   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:24.263529   30645 logs.go:274] 0 containers: []
	W0725 16:55:24.263542   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:24.263551   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:24.263558   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:24.304650   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:55:24.304662   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:55:24.316840   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:24.316855   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:24.376593   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:24.376606   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:24.376615   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:55:24.390593   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:24.390606   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:26.439734   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049100568s)
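
The container-status command relies on a double fallback that the log quotes verbatim: `which crictl || echo crictl` substitutes the bare word crictl when the binary is missing, so the first sudo invocation fails with "command not found" and control falls through to `sudo docker ps -a`. Each pass consistently completes in about 2.05s, which accounts for most of every ~5s polling interval:

    # Fallback structure of the container-status probe (verbatim from the log):
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # crictl present -> runs "sudo /path/to/crictl ps -a"
    # crictl absent  -> "sudo crictl ps -a" fails -> "sudo docker ps -a" runs
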
	I0725 16:55:28.941073   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:29.019502   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:29.049749   30645 logs.go:274] 0 containers: []
	W0725 16:55:29.049765   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:29.049832   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:29.078455   30645 logs.go:274] 0 containers: []
	W0725 16:55:29.078468   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:29.078526   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:29.110653   30645 logs.go:274] 0 containers: []
	W0725 16:55:29.110667   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:29.110725   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:29.142622   30645 logs.go:274] 0 containers: []
	W0725 16:55:29.142639   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:29.142706   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:29.181558   30645 logs.go:274] 0 containers: []
	W0725 16:55:29.181595   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:29.181653   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:29.209685   30645 logs.go:274] 0 containers: []
	W0725 16:55:29.209699   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:29.209754   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:29.241554   30645 logs.go:274] 0 containers: []
	W0725 16:55:29.241572   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:29.241642   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:29.295307   30645 logs.go:274] 0 containers: []
	W0725 16:55:29.295319   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:29.295326   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:29.295332   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:31.350758   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055399465s)
	I0725 16:55:31.350865   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:31.350872   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:31.390124   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:55:31.390139   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:55:31.402536   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:31.402550   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:31.456661   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:31.456671   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:31.456679   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
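The block above is one complete diagnostic pass, and it repeats every five seconds while minikube waits for the apiserver: probe for each expected control-plane container by name, then gather container status, kubelet, dmesg, 'kubectl describe nodes', and Docker journal output. The probes can be reproduced by hand from a shell on the node (a sketch using only commands already shown in this log; getting the shell via 'minikube ssh' is an assumption):

	# Probe for one control-plane container the way logs.go does.
	# Empty output corresponds to the "0 containers: []" lines above.
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	# The log-gathering half of the same pass.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400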
	I0725 16:55:33.971293   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:34.017463   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:34.046271   30645 logs.go:274] 0 containers: []
	W0725 16:55:34.046284   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:34.046344   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:34.074103   30645 logs.go:274] 0 containers: []
	W0725 16:55:34.074116   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:34.074173   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:34.103854   30645 logs.go:274] 0 containers: []
	W0725 16:55:34.103866   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:34.103923   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:34.134297   30645 logs.go:274] 0 containers: []
	W0725 16:55:34.134317   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:34.134379   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:34.164225   30645 logs.go:274] 0 containers: []
	W0725 16:55:34.164262   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:34.164329   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:34.195984   30645 logs.go:274] 0 containers: []
	W0725 16:55:34.195996   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:34.196054   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:34.224016   30645 logs.go:274] 0 containers: []
	W0725 16:55:34.224028   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:34.224088   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:34.252372   30645 logs.go:274] 0 containers: []
	W0725 16:55:34.252384   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:34.252391   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:55:34.252398   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:55:34.263924   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:34.263937   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:34.318107   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:34.318124   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:34.318135   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:55:34.334017   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:34.334029   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:36.385285   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051230176s)
	I0725 16:55:36.385412   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:36.385419   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:38.929624   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:39.017566   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:39.049023   30645 logs.go:274] 0 containers: []
	W0725 16:55:39.049035   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:39.049094   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:39.078972   30645 logs.go:274] 0 containers: []
	W0725 16:55:39.078985   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:39.079043   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:39.114902   30645 logs.go:274] 0 containers: []
	W0725 16:55:39.114912   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:39.114962   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:39.149050   30645 logs.go:274] 0 containers: []
	W0725 16:55:39.149063   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:39.149125   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:39.177716   30645 logs.go:274] 0 containers: []
	W0725 16:55:39.177729   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:39.177775   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:39.207667   30645 logs.go:274] 0 containers: []
	W0725 16:55:39.207684   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:39.207743   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:39.237684   30645 logs.go:274] 0 containers: []
	W0725 16:55:39.237720   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:39.237795   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:39.266626   30645 logs.go:274] 0 containers: []
	W0725 16:55:39.266639   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:39.266647   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:39.266654   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:39.321002   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:39.321015   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:39.321021   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:55:39.334788   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:39.334801   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:41.390209   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055380997s)
	I0725 16:55:41.390320   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:41.390327   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:41.430557   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:55:41.430572   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:55:43.942914   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:44.019650   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:44.051465   30645 logs.go:274] 0 containers: []
	W0725 16:55:44.051478   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:44.051536   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:44.080926   30645 logs.go:274] 0 containers: []
	W0725 16:55:44.080960   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:44.081057   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:44.109896   30645 logs.go:274] 0 containers: []
	W0725 16:55:44.109907   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:44.109964   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:44.139612   30645 logs.go:274] 0 containers: []
	W0725 16:55:44.139624   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:44.139682   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:44.170338   30645 logs.go:274] 0 containers: []
	W0725 16:55:44.170350   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:44.170411   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:44.199489   30645 logs.go:274] 0 containers: []
	W0725 16:55:44.199501   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:44.199623   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:44.229034   30645 logs.go:274] 0 containers: []
	W0725 16:55:44.229045   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:44.229143   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:44.259505   30645 logs.go:274] 0 containers: []
	W0725 16:55:44.259519   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:44.259527   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:44.259535   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:55:44.273202   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:44.273215   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:46.326687   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053445162s)
	I0725 16:55:46.326794   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:46.326800   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:46.366887   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:55:46.366902   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:55:46.378337   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:46.378350   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:46.433420   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:48.935783   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:49.019687   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:49.050761   30645 logs.go:274] 0 containers: []
	W0725 16:55:49.050774   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:49.050834   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:49.082424   30645 logs.go:274] 0 containers: []
	W0725 16:55:49.082435   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:49.082496   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:49.112494   30645 logs.go:274] 0 containers: []
	W0725 16:55:49.112505   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:49.112569   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:49.144108   30645 logs.go:274] 0 containers: []
	W0725 16:55:49.144122   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:49.144185   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:49.175650   30645 logs.go:274] 0 containers: []
	W0725 16:55:49.175661   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:49.175724   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:49.207879   30645 logs.go:274] 0 containers: []
	W0725 16:55:49.207892   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:49.207949   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:49.237405   30645 logs.go:274] 0 containers: []
	W0725 16:55:49.237417   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:49.237471   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:49.268368   30645 logs.go:274] 0 containers: []
	W0725 16:55:49.268381   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:49.268389   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:49.268396   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:49.310450   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:55:49.310464   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:55:49.322511   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:49.322526   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:49.377712   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:49.377724   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:49.377732   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:55:49.391750   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:49.391762   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:51.443275   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051485333s)
	I0725 16:55:53.945780   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:54.019737   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:54.050021   30645 logs.go:274] 0 containers: []
	W0725 16:55:54.050033   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:54.050097   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:54.079371   30645 logs.go:274] 0 containers: []
	W0725 16:55:54.079383   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:54.079443   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:54.107590   30645 logs.go:274] 0 containers: []
	W0725 16:55:54.107603   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:54.107664   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:54.137095   30645 logs.go:274] 0 containers: []
	W0725 16:55:54.137107   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:54.137166   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:54.166718   30645 logs.go:274] 0 containers: []
	W0725 16:55:54.166730   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:54.166792   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:54.196706   30645 logs.go:274] 0 containers: []
	W0725 16:55:54.196716   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:54.196776   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:54.225666   30645 logs.go:274] 0 containers: []
	W0725 16:55:54.225678   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:54.225772   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:54.255581   30645 logs.go:274] 0 containers: []
	W0725 16:55:54.255594   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:54.255601   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:54.255608   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:55:54.270515   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:54.270527   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:55:56.328940   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058387517s)
	I0725 16:55:56.329046   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:55:56.329053   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:55:56.371104   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:55:56.371117   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:55:56.383410   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:56.383430   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:56.437577   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:58.938054   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:55:59.017824   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:55:59.049437   30645 logs.go:274] 0 containers: []
	W0725 16:55:59.049449   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:55:59.049507   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:55:59.077588   30645 logs.go:274] 0 containers: []
	W0725 16:55:59.077600   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:55:59.077661   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:55:59.106968   30645 logs.go:274] 0 containers: []
	W0725 16:55:59.106980   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:55:59.107036   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:55:59.136445   30645 logs.go:274] 0 containers: []
	W0725 16:55:59.136459   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:55:59.136529   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:55:59.165709   30645 logs.go:274] 0 containers: []
	W0725 16:55:59.165722   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:55:59.165786   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:55:59.193626   30645 logs.go:274] 0 containers: []
	W0725 16:55:59.193638   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:55:59.193697   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:55:59.222312   30645 logs.go:274] 0 containers: []
	W0725 16:55:59.222324   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:55:59.222390   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:55:59.251956   30645 logs.go:274] 0 containers: []
	W0725 16:55:59.251970   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:55:59.251977   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:55:59.251983   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:55:59.306128   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:55:59.306139   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:55:59.306145   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:55:59.320530   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:55:59.320543   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:56:01.376276   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055706644s)
	I0725 16:56:01.376382   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:56:01.376389   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:56:01.416958   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:56:01.416970   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:56:03.930815   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:03.940063   30645 kubeadm.go:630] restartCluster took 4m5.611815756s
	W0725 16:56:03.940157   30645 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
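restartCluster spent 4m5.6s polling for an apiserver process that never appeared, so minikube falls back to a full reset and a fresh 'kubeadm init'. The liveness probe it was polling is the repeated pgrep above; by hand (sketch, pattern copied verbatim from the Run lines):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	echo $?   # 1 throughout this log: no matching process ever existed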
	I0725 16:56:03.940174   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:56:04.371868   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:56:04.382270   30645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:56:04.391315   30645 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:56:04.391409   30645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:56:04.400006   30645 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
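The ls probe exits with status 2 because none of the four kubeconfig files exist after the reset, so there is no stale config to clean up and minikube goes straight to 'kubeadm init' below. The same condition can be checked by hand (sketch):

	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	            /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	echo $?   # 2 = at least one of the listed files is missing, matching the errors above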
	I0725 16:56:04.400035   30645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:56:05.304425   30645 out.go:204]   - Generating certificates and keys ...
	I0725 16:56:05.808767   30645 out.go:204]   - Booting up control plane ...
	W0725 16:58:00.726845   30645 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
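The attempt dies in the wait-control-plane phase: kubeadm polls the kubelet healthz endpoint on 127.0.0.1:10248 and every call is refused, meaning the kubelet itself never came up. The probe and the follow-ups kubeadm suggests can be run directly on the node (sketch, using only commands quoted in the output above):

	curl -sSL http://localhost:10248/healthz      # refused here: kubelet is not listening
	systemctl status kubelet                      # is the unit running at all?
	journalctl -xeu kubelet                       # why it exited
	docker ps -a | grep kube | grep -v pause      # any crashed control-plane containers?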
	
	I0725 16:58:00.726876   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:58:01.152676   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:58:01.162348   30645 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:58:01.162398   30645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:58:01.169739   30645 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:58:01.169757   30645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:58:01.932563   30645 out.go:204]   - Generating certificates and keys ...
	I0725 16:58:02.879021   30645 out.go:204]   - Booting up control plane ...
	I0725 16:59:57.797952   30645 kubeadm.go:397] StartCluster complete in 7m59.508645122s
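Timing check from the timestamps above: restartCluster 4m5.6s, kubeadm reset ~0.4s, first init attempt 16:56:04 to 16:58:00 (~1m56s), second reset ~0.4s, second init attempt 16:58:01 to 16:59:57 (~1m57s); the sum comes to roughly 7m59s, matching the 7m59.5s StartCluster total.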
	I0725 16:59:57.798033   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:59:57.827359   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.827371   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:59:57.827433   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:59:57.857686   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.857699   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:59:57.857755   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:59:57.887067   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.887079   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:59:57.887137   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:59:57.916980   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.916992   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:59:57.917054   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:59:57.946633   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.946646   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:59:57.946705   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:59:57.976302   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.976314   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:59:57.976371   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:59:58.006163   30645 logs.go:274] 0 containers: []
	W0725 16:59:58.006175   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:59:58.006233   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:59:58.034791   30645 logs.go:274] 0 containers: []
	W0725 16:59:58.034803   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:59:58.034811   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:59:58.034818   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:59:58.075762   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:59:58.075777   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:59:58.087641   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:59:58.087653   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:59:58.142043   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:59:58.142055   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:59:58.142062   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:59:58.156155   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:59:58.156167   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 17:00:00.209432   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053238365s)
	W0725 17:00:00.209581   30645 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 17:00:00.209596   30645 out.go:239] * 
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 17:00:00.209776   30645 out.go:239] * 
	* 
	W0725 17:00:00.210311   30645 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 17:00:00.272919   30645 out.go:177] 
	W0725 17:00:00.315153   30645 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 17:00:00.315316   30645 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 17:00:00.315414   30645 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 17:00:00.372884   30645 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220725164610-14919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
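The kubeadm output above points repeatedly at the kubelet. With the docker driver those checks have to run inside the node container rather than on the macOS host; a minimal sketch of the suggested diagnostics via minikube ssh, with the profile name taken from this log, CONTAINERID left as the placeholder from kubeadm's own message, and the retry flag taken from minikube's Suggestion line above:

	# Kubelet health, per the kubeadm advice above (runs inside the node)
	out/minikube-darwin-amd64 -p old-k8s-version-20220725164610-14919 ssh -- sudo systemctl status kubelet
	out/minikube-darwin-amd64 -p old-k8s-version-20220725164610-14919 ssh -- sudo journalctl -xeu kubelet
	# List the Kubernetes containers, then pull logs from the failing one
	out/minikube-darwin-amd64 -p old-k8s-version-20220725164610-14919 ssh -- "docker ps -a | grep kube | grep -v pause"
	out/minikube-darwin-amd64 -p old-k8s-version-20220725164610-14919 ssh -- docker logs CONTAINERID
	# Retry suggested by minikube itself for cgroup-driver mismatches
	out/minikube-darwin-amd64 start -p old-k8s-version-20220725164610-14919 --extra-config=kubelet.cgroup-driver=systemd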
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725164610-14919
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725164610-14919:

-- stdout --
	[
	    {
	        "Id": "3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf",
	        "Created": "2022-07-25T23:46:16.38043483Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244113,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:51:54.648798687Z",
	            "FinishedAt": "2022-07-25T23:51:51.718201115Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hosts",
	        "LogPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf-json.log",
	        "Name": "/old-k8s-version-20220725164610-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725164610-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725164610-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c70
0ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/d
ocker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a70
0e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725164610-14919",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725164610-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725164610-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1e8c374f85bd4349655b5dfcfe823620a484a31bb6415a2e0b8632dd020452f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50823"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50824"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50825"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50826"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50822"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c1e8c374f85b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725164610-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3e45dea9c014",
	                        "old-k8s-version-20220725164610-14919"
	                    ],
	                    "NetworkID": "cc2155f0f89448c4255b6f474f0b34c64b5460d3acc5441984909bacee63d7d6",
	                    "EndpointID": "aa5034ea8648431be616c4e8025677bb27e250d86bdb70415b75ae2f6083245f",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
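Rather than scanning the full JSON above, the fields this post-mortem actually relies on (container state and the host port mappings) can be pulled directly with docker inspect's Go-template --format flag; a minimal sketch, with the container name taken from this log:

	# Container state: here Status=running despite the failed kubeadm init
	docker inspect --format '{{.State.Status}} restarts={{.RestartCount}} started={{.State.StartedAt}}' old-k8s-version-20220725164610-14919
	# Host port mappings for 22/2376/5000/8443/32443
	docker inspect --format '{{json .NetworkSettings.Ports}}' old-k8s-version-20220725164610-14919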
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 2 (474.317089ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
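As the helper notes, the non-zero exit from minikube status is not necessarily fatal here: the host container reports Running while the Kubernetes components above it are down, and minikube signals that through the exit code. A minimal sketch of the same check in a shell, with the profile name taken from this log (the exact code-to-state mapping is minikube's own and is not restated here):

	out/minikube-darwin-amd64 status --format='{{.Host}}' -p old-k8s-version-20220725164610-14919
	rc=$?
	# As seen above: exit status 2 with Host=Running indicates the host is up
	# but the cluster on top of it is unhealthy, hence "may be ok"
	echo "minikube status exit code: ${rc}"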
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220725164610-14919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220725164610-14919 logs -n 25: (3.792175897s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | kubenet-20220725163045-14919            | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	|         | kubenet-20220725163045-14919                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220725163045-14919 | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	|         | enable-default-cni-20220725163045-14919           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220725163045-14919            | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	|         | kubenet-20220725163045-14919                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220725163045-14919 | jenkins | v1.26.0 | 25 Jul 22 16:46 PDT | 25 Jul 22 16:46 PDT |
	|         | enable-default-cni-20220725163045-14919           |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:46 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220725163045-14919            | jenkins | v1.26.0 | 25 Jul 22 16:47 PDT | 25 Jul 22 16:47 PDT |
	|         | kubenet-20220725163045-14919                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:47 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:53 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:50 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT | 25 Jul 22 16:51 PDT |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT | 25 Jul 22 16:51 PDT |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT |                     |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 16:56:03
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 16:56:03.433534   31337 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:56:03.433731   31337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:56:03.433737   31337 out.go:309] Setting ErrFile to fd 2...
	I0725 16:56:03.433741   31337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:56:03.433881   31337 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:56:03.434424   31337 out.go:303] Setting JSON to false
	I0725 16:56:03.449478   31337 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10286,"bootTime":1658783077,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:56:03.449569   31337 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:56:03.471556   31337 out.go:177] * [embed-certs-20220725165448-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:56:03.515487   31337 notify.go:193] Checking for updates...
	I0725 16:56:03.537285   31337 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:56:03.559095   31337 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:56:03.580425   31337 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:56:03.602303   31337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:56:03.625261   31337 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:56:03.646919   31337 config.go:178] Loaded profile config "embed-certs-20220725165448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:56:03.647548   31337 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:56:03.716719   31337 docker.go:137] docker version: linux-20.10.17
	I0725 16:56:03.716857   31337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:56:03.850783   31337 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:56:03.793505502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:56:03.871916   31337 out.go:177] * Using the docker driver based on existing profile
	I0725 16:56:03.893953   31337 start.go:284] selected driver: docker
	I0725 16:56:03.893988   31337 start.go:808] validating driver "docker" against &{Name:embed-certs-20220725165448-14919 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:56:03.894188   31337 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:56:03.897532   31337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:56:04.045703   31337 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:56:03.982785914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:56:04.045859   31337 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 16:56:04.045875   31337 cni.go:95] Creating CNI manager for ""
	I0725 16:56:04.045886   31337 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:56:04.045899   31337 start_flags.go:310] config:
	{Name:embed-certs-20220725165448-14919 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:56:04.088356   31337 out.go:177] * Starting control plane node embed-certs-20220725165448-14919 in cluster embed-certs-20220725165448-14919
	I0725 16:56:04.109451   31337 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:56:04.130134   31337 out.go:177] * Pulling base image ...
	I0725 16:56:04.172375   31337 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:56:04.172376   31337 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:56:04.172427   31337 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0725 16:56:04.172439   31337 cache.go:57] Caching tarball of preloaded images
	I0725 16:56:04.172566   31337 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 16:56:04.172579   31337 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
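
The preload check above comes down to a stat of one expected tarball path under the profile's cache; when the file is missing, minikube falls back to pulling images individually. A sketch of that existence check; preloadPath and the v18 naming scheme are copied from the path in the log, not from any exported API:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
        "path/filepath"
    )

    // preloadPath mirrors the cache layout seen in the log (an assumption, not an API).
    func preloadPath(miniHome, k8sVersion string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
        return filepath.Join(miniHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.24.3")
        if _, err := os.Stat(p); errors.Is(err, fs.ErrNotExist) {
            fmt.Println("no local preload, would download:", p)
            return
        } else if err != nil {
            fmt.Println("stat failed:", err)
            return
        }
        fmt.Println("found local preload, skipping download:", p)
    }
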
	I0725 16:56:04.173197   31337 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/config.json ...
	I0725 16:56:04.236416   31337 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:56:04.236434   31337 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:56:04.236446   31337 cache.go:208] Successfully downloaded all kic artifacts
	I0725 16:56:04.236526   31337 start.go:370] acquiring machines lock for embed-certs-20220725165448-14919: {Name:mkbc95d1eab1ca3410e49bf2a4e793a24fb963ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:56:04.236618   31337 start.go:374] acquired machines lock for "embed-certs-20220725165448-14919" in 73.505µs
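
The machines lock above (500ms delay, 10m timeout) is what serializes host mutation when several minikube processes run at once, as the interleaved PIDs 31337 and 30645 do throughout this log. The real lock implementation is not shown here; this is a simplified stand-in that only demonstrates the acquire/retry/timeout shape, using an exclusive lock file:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "log"
        "os"
        "time"
    )

    // acquire spins on an O_EXCL lock file until it wins or the timeout expires.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !errors.Is(err, fs.ErrExist) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/machines-embed-certs.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            log.Fatal(err)
        }
        defer release()
        fmt.Println("lock held; safe to mutate the machine")
    }
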
	I0725 16:56:04.236655   31337 start.go:95] Skipping create...Using existing machine configuration
	I0725 16:56:04.236666   31337 fix.go:55] fixHost starting: 
	I0725 16:56:04.236886   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 16:56:04.304136   31337 fix.go:103] recreateIfNeeded on embed-certs-20220725165448-14919: state=Stopped err=<nil>
	W0725 16:56:04.304166   31337 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 16:56:04.346631   31337 out.go:177] * Restarting existing docker container for "embed-certs-20220725165448-14919" ...
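
fixHost above inspects the container and, seeing state=Stopped, chooses restart over recreate. The same decision sketched against the docker CLI; the mapping of minikube's Stopped to docker's exited status is an assumption for illustration:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        name := "embed-certs-20220725165448-14919"
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            log.Fatalf("inspect %s: %v", name, err)
        }
        switch state := strings.TrimSpace(string(out)); state {
        case "running":
            fmt.Println("machine already running, nothing to do")
        case "exited": // the log's state=Stopped corresponds to docker's exited
            fmt.Println("restarting existing container")
            if err := exec.Command("docker", "start", name).Run(); err != nil {
                log.Fatalf("docker start: %v", err)
            }
        default:
            fmt.Println("unexpected state, would recreate:", state)
        }
    }
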
	I0725 16:56:03.930815   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:03.940063   30645 kubeadm.go:630] restartCluster took 4m5.611815756s
	W0725 16:56:03.940157   30645 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0725 16:56:03.940174   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:56:04.371868   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:56:04.382270   30645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:56:04.391315   30645 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:56:04.391409   30645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:56:04.400006   30645 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:56:04.400035   30645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:56:05.304425   30645 out.go:204]   - Generating certificates and keys ...
	I0725 16:56:04.367742   31337 cli_runner.go:164] Run: docker start embed-certs-20220725165448-14919
	I0725 16:56:04.744066   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 16:56:04.827385   31337 kic.go:415] container "embed-certs-20220725165448-14919" state is running.
	I0725 16:56:04.828035   31337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725165448-14919
	I0725 16:56:04.912426   31337 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/config.json ...
	I0725 16:56:04.912942   31337 machine.go:88] provisioning docker machine ...
	I0725 16:56:04.912971   31337 ubuntu.go:169] provisioning hostname "embed-certs-20220725165448-14919"
	I0725 16:56:04.913056   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:04.999598   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:04.999819   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:04.999838   31337 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220725165448-14919 && echo "embed-certs-20220725165448-14919" | sudo tee /etc/hostname
	I0725 16:56:05.137366   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220725165448-14919
	
	I0725 16:56:05.137451   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:05.224934   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:05.225280   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:05.225297   31337 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220725165448-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220725165448-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220725165448-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 16:56:05.351826   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: 
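
The libmachine "native" SSH client in this stretch is golang.org/x/crypto/ssh dialing the container's published SSH port (127.0.0.1:51310 here). A stripped-down sketch of running one remote command that way; the key path mirrors the machines/<profile>/id_rsa location that appears later in the log, and user docker matches the sshutil lines:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Assumed key location; the log keeps it under MINIKUBE_HOME instead of $HOME.
        key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/embed-certs-20220725165448-14919/id_rsa"))
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a local test node, not for real hosts
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:51310", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("remote hostname: %s", out)
    }
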
	I0725 16:56:05.351845   31337 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:56:05.351871   31337 ubuntu.go:177] setting up certificates
	I0725 16:56:05.351880   31337 provision.go:83] configureAuth start
	I0725 16:56:05.351957   31337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725165448-14919
	I0725 16:56:05.433243   31337 provision.go:138] copyHostCerts
	I0725 16:56:05.433345   31337 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:56:05.433355   31337 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:56:05.433478   31337 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:56:05.433791   31337 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:56:05.433801   31337 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:56:05.433872   31337 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:56:05.434037   31337 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:56:05.434043   31337 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:56:05.434112   31337 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:56:05.434245   31337 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220725165448-14919 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220725165448-14919]
	I0725 16:56:05.543085   31337 provision.go:172] copyRemoteCerts
	I0725 16:56:05.543159   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:56:05.543212   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:05.626756   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:05.718355   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:56:05.738285   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0725 16:56:05.769330   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 16:56:05.792698   31337 provision.go:86] duration metric: configureAuth took 440.796611ms
	I0725 16:56:05.792721   31337 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:56:05.792935   31337 config.go:178] Loaded profile config "embed-certs-20220725165448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:56:05.793007   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:05.872213   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:05.872420   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:05.872432   31337 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:56:05.994661   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:56:05.994679   31337 ubuntu.go:71] root file system type: overlay
	I0725 16:56:05.994840   31337 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:56:05.994916   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.071541   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:06.071747   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:06.071803   31337 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:56:06.201902   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:56:06.201994   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.274921   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:06.275076   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:06.275096   31337 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 16:56:06.403965   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:56:06.403988   31337 machine.go:91] provisioned docker machine in 1.491027379s
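
The unit update just performed is deliberately idempotent: write docker.service.new, diff it against the live unit, and only on a difference swap the file in and daemon-reload/enable/restart. The same pattern sketched locally (paths and the systemctl sequence are taken from the SSH command above; this would need root):

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const unit = "/lib/systemd/system/docker.service"
        current, _ := os.ReadFile(unit) // unit may not exist yet; treat as empty
        proposed, err := os.ReadFile(unit + ".new")
        if err != nil {
            log.Fatal(err)
        }
        if bytes.Equal(current, proposed) {
            fmt.Println("unit unchanged, skipping restart")
            return
        }
        if err := os.Rename(unit+".new", unit); err != nil {
            log.Fatal(err)
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            if err := exec.Command("systemctl", args...).Run(); err != nil {
                log.Fatalf("systemctl %v: %v", args, err)
            }
        }
    }
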
	I0725 16:56:06.404000   31337 start.go:307] post-start starting for "embed-certs-20220725165448-14919" (driver="docker")
	I0725 16:56:06.404006   31337 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:56:06.404073   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:56:06.404133   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.476046   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.566386   31337 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:56:06.569878   31337 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:56:06.569892   31337 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:56:06.569898   31337 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:56:06.569903   31337 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 16:56:06.569913   31337 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:56:06.570034   31337 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:56:06.570192   31337 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:56:06.570362   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:56:06.577828   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:56:06.594791   31337 start.go:310] post-start completed in 190.779597ms
	I0725 16:56:06.594866   31337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:56:06.594916   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.669069   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.756422   31337 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:56:06.761025   31337 fix.go:57] fixHost completed within 2.524342859s
	I0725 16:56:06.761037   31337 start.go:82] releasing machines lock for "embed-certs-20220725165448-14919", held for 2.524394197s
	I0725 16:56:06.761113   31337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725165448-14919
	I0725 16:56:06.833722   31337 ssh_runner.go:195] Run: systemctl --version
	I0725 16:56:06.833735   31337 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:56:06.833788   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.833798   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.913090   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.916204   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.999674   31337 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:56:07.221803   31337 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:56:07.221878   31337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:56:07.233712   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:56:07.246547   31337 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 16:56:07.308561   31337 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:56:07.377049   31337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:56:07.439815   31337 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:56:07.676316   31337 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 16:56:07.755611   31337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:56:07.831651   31337 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 16:56:07.841040   31337 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 16:56:07.841101   31337 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 16:56:07.846451   31337 start.go:471] Will wait 60s for crictl version
	I0725 16:56:07.846501   31337 ssh_runner.go:195] Run: sudo crictl version
	I0725 16:56:07.944939   31337 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 16:56:07.945009   31337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:56:07.979201   31337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:56:05.808767   30645 out.go:204]   - Booting up control plane ...
	I0725 16:56:08.057107   31337 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0725 16:56:08.057277   31337 cli_runner.go:164] Run: docker exec -t embed-certs-20220725165448-14919 dig +short host.docker.internal
	I0725 16:56:08.186719   31337 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:56:08.186830   31337 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:56:08.191311   31337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
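
The /bin/bash one-liner above is a hosts-file upsert: filter out any stale host.minikube.internal line, append the fresh mapping, write the result to a temp file, then copy it over /etc/hosts. An equivalent sketch in Go, staging through a temp file the same way (would need root, like the sudo cp in the log):

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    // upsertHost rewrites /etc/hosts-style content so exactly one line maps host -> ip.
    func upsertHost(content, ip, host string) string {
        var kept []string
        for _, line := range strings.Split(content, "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale mapping, as the grep -v does
            }
            kept = append(kept, line)
        }
        return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
            fmt.Sprintf("\n%s\t%s\n", ip, host)
    }

    func main() {
        const path = "/etc/hosts"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        tmp := path + ".tmp"
        updated := upsertHost(string(data), "192.168.65.2", "host.minikube.internal")
        if err := os.WriteFile(tmp, []byte(updated), 0644); err != nil {
            log.Fatal(err)
        }
        if err := os.Rename(tmp, path); err != nil {
            log.Fatal(err)
        }
    }
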
	I0725 16:56:08.201156   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:08.275039   31337 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:56:08.275116   31337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:56:08.304877   31337 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 16:56:08.304899   31337 docker.go:542] Images already preloaded, skipping extraction
	I0725 16:56:08.304983   31337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:56:08.336195   31337 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 16:56:08.336253   31337 cache_images.go:84] Images are preloaded, skipping loading
	I0725 16:56:08.336397   31337 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 16:56:08.409222   31337 cni.go:95] Creating CNI manager for ""
	I0725 16:56:08.409235   31337 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:56:08.409251   31337 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:56:08.409279   31337 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220725165448-14919 NodeName:embed-certs-20220725165448-14919 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:56:08.409450   31337 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220725165448-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 16:56:08.409534   31337 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220725165448-14919 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
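
The generated kubeadm.yaml above is a single file carrying four YAML documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch of walking such a multi-document file with gopkg.in/yaml.v3, pulling just each document's kind:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            // Decode returns io.EOF once every document has been read.
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }
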
	I0725 16:56:08.409594   31337 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0725 16:56:08.417474   31337 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 16:56:08.417537   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:56:08.424560   31337 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0725 16:56:08.437566   31337 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 16:56:08.468744   31337 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0725 16:56:08.481183   31337 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:56:08.484973   31337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:56:08.494671   31337 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919 for IP: 192.168.76.2
	I0725 16:56:08.494789   31337 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:56:08.494855   31337 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:56:08.495018   31337 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/client.key
	I0725 16:56:08.495092   31337 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/apiserver.key.31bdca25
	I0725 16:56:08.495177   31337 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/proxy-client.key
	I0725 16:56:08.495477   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:56:08.495545   31337 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:56:08.495559   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:56:08.495593   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:56:08.495624   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:56:08.495653   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:56:08.495726   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:56:08.496246   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:56:08.513745   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 16:56:08.531066   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:56:08.548205   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 16:56:08.566013   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:56:08.582490   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:56:08.599475   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:56:08.616680   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:56:08.633438   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:56:08.650322   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:56:08.667527   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:56:08.684813   31337 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 16:56:08.697928   31337 ssh_runner.go:195] Run: openssl version
	I0725 16:56:08.703211   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:56:08.710894   31337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:56:08.714829   31337 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:56:08.714882   31337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:56:08.719947   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 16:56:08.728099   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:56:08.736150   31337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:56:08.740028   31337 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:56:08.740070   31337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:56:08.745643   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 16:56:08.752922   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:56:08.760821   31337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:56:08.765131   31337 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:56:08.765176   31337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:56:08.770300   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
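
Each openssl x509 -hash -noout run above computes the subject-hash filename that OpenSSL-linked tools expect under /etc/ssl/certs (b5213941.0 for minikubeCA.pem, for example), and the ln -fs publishes the cert under that name. The same two steps, sketched by shelling out to the openssl binary:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // hashLink installs cert into dir under OpenSSL's <subject-hash>.0 name.
    func hashLink(cert, dir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            return fmt.Errorf("openssl hash of %s: %w", cert, err)
        }
        link := fmt.Sprintf("%s/%s.0", dir, strings.TrimSpace(string(out)))
        _ = os.Remove(link) // -f semantics: replace any stale link
        return os.Symlink(cert, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            log.Fatal(err)
        }
    }
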
	I0725 16:56:08.777357   31337 kubeadm.go:395] StartCluster: {Name:embed-certs-20220725165448-14919 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:56:08.777464   31337 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:56:08.807200   31337 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:56:08.814843   31337 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 16:56:08.814862   31337 kubeadm.go:626] restartCluster start
	I0725 16:56:08.814913   31337 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 16:56:08.821469   31337 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:08.821534   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:08.897952   31337 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220725165448-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:56:08.898156   31337 kubeconfig.go:127] "embed-certs-20220725165448-14919" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig - will repair!
	I0725 16:56:08.898466   31337 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:56:08.899825   31337 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 16:56:08.907910   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:08.907973   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:08.916840   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.118655   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.118753   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.129281   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.319023   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.319249   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.330056   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.517396   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.517539   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.528246   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.719033   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.719162   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.729548   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.919025   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.919173   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.929719   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.119141   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.119244   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.129805   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.318229   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.318452   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.328587   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.519054   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.519263   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.530051   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.719032   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.719238   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.729880   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.919240   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.919342   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.929774   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.117018   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.117113   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.126575   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.317191   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.317355   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.328052   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.519054   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.519269   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.529681   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.718964   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.719135   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.729819   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.917205   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.917274   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.925970   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.925980   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.926026   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.934283   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.934294   31337 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
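
The repeated "Checking apiserver status" stanzas above are a fixed-interval poll of pgrep that keeps returning exit status 1 (no matching process) until the deadline is hit and minikube concludes it needs a reconfigure. A minimal sketch of that loop, with the pattern taken from the log and the interval approximated from the timestamps:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls pgrep until kube-apiserver appears or the
// deadline passes; pgrep exits non-zero while nothing matches, which is
// exactly the "Process exited with status 1" seen above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return "", fmt.Errorf("timed out waiting for kube-apiserver pid")
}

func main() {
	pid, err := waitForAPIServerPID(3 * time.Second)
	fmt.Println(pid, err)
}
```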
	I0725 16:56:11.934304   31337 kubeadm.go:1092] stopping kube-system containers ...
	I0725 16:56:11.934365   31337 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:56:11.964872   31337 docker.go:443] Stopping containers: [9a167f413b73 c2c372481520 fa18253e55a4 b4b22c2bf1f2 bd98a2b23e46 aae50f7a8dff 751586c3bb9b 8e494f6ee1bf 7d251a39f801 c3027cf7039f ed3d81f7d6d9 225d3bf16e2b 98c148ba1de9 fead1519fc44 f1baffe473a6 4f47378a827e]
	I0725 16:56:11.964950   31337 ssh_runner.go:195] Run: docker stop 9a167f413b73 c2c372481520 fa18253e55a4 b4b22c2bf1f2 bd98a2b23e46 aae50f7a8dff 751586c3bb9b 8e494f6ee1bf 7d251a39f801 c3027cf7039f ed3d81f7d6d9 225d3bf16e2b 98c148ba1de9 fead1519fc44 f1baffe473a6 4f47378a827e
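
Stopping the kube-system containers is a two-step docker invocation, visible in the two Run lines above: list IDs with a name filter, then pass them all to a single `docker stop`. A sketch of the same sequence, assuming the docker CLI is available:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists container IDs whose names match
// k8s_<anything>_(kube-system)_ and stops them in one docker stop call.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	args := append([]string{"stop"}, ids...)
	return exec.Command("docker", args...).Run()
}

func main() {
	fmt.Println(stopKubeSystemContainers())
}
```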
	I0725 16:56:11.994922   31337 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 16:56:12.005330   31337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:56:12.013063   31337 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 25 23:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 25 23:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jul 25 23:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 25 23:55 /etc/kubernetes/scheduler.conf
	
	I0725 16:56:12.013113   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 16:56:12.020769   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 16:56:12.028247   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 16:56:12.035399   31337 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:12.035447   31337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 16:56:12.042273   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 16:56:12.049752   31337 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:12.049803   31337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
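
The grep/rm pairs above implement "keep each kubeconfig only if it already points at the expected control-plane endpoint". The log does this over SSH with sudo grep and rm; an equivalent local sketch of the decision (paths and function name illustrative):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// removeIfStale deletes a kubeconfig-style file that does not mention the
// expected endpoint, mirroring the grep-then-rm sequence in the log; like
// rm -f, a failed read also leads to removal.
func removeIfStale(path string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // file already targets the right endpoint, keep it
	}
	return os.Remove(path)
}

func main() {
	fmt.Println(removeIfStale("/etc/kubernetes/scheduler.conf"))
}
```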
	I0725 16:56:12.056784   31337 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:56:12.064194   31337 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 16:56:12.064205   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:12.110551   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:12.991729   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:13.176129   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:13.230499   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
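
Reconfiguration re-runs the individual `kubeadm init phase` subcommands one by one rather than a full init, in the order shown by the five Run lines above. A sketch of that sequence, assuming kubeadm sits at the versioned path from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rerunInitPhases executes the five phases in the order the log runs them.
func rerunInitPhases() error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.24.3/kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(rerunInitPhases())
}
```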
	I0725 16:56:13.306926   31337 api_server.go:51] waiting for apiserver process to appear ...
	I0725 16:56:13.306998   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:13.818325   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:14.316810   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:14.816722   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:14.832982   31337 api_server.go:71] duration metric: took 1.526047531s to wait for apiserver process to appear ...
	I0725 16:56:14.833006   31337 api_server.go:87] waiting for apiserver healthz status ...
	I0725 16:56:14.833021   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:17.439565   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 16:56:17.439586   31337 api_server.go:102] status: https://127.0.0.1:51314/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 16:56:17.940421   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:17.947568   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 16:56:17.947582   31337 api_server.go:102] status: https://127.0.0.1:51314/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:56:18.439749   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:18.460813   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 16:56:18.460830   31337 api_server.go:102] status: https://127.0.0.1:51314/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:56:18.939728   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:18.948093   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 200:
	ok
	I0725 16:56:18.957429   31337 api_server.go:140] control plane version: v1.24.3
	I0725 16:56:18.957444   31337 api_server.go:130] duration metric: took 4.124403291s to wait for apiserver health ...
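
The healthz wait above shows the apiserver's normal warm-up sequence: first 403 (the anonymous probe is rejected before RBAC bootstrap roles exist), then 500 while the rbac and priority-class post-start hooks finish, then 200 "ok". A minimal polling sketch; skipping TLS verification and probing anonymously is a simplification for illustration, not how minikube authenticates:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 "ok" or the deadline
// passes; early 403/500 responses, as in the log, count as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	fmt.Println(waitForHealthz("https://127.0.0.1:51314/healthz", time.Minute))
}
```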
	I0725 16:56:18.957449   31337 cni.go:95] Creating CNI manager for ""
	I0725 16:56:18.957455   31337 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:56:18.957467   31337 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 16:56:18.966151   31337 system_pods.go:59] 8 kube-system pods found
	I0725 16:56:18.966170   31337 system_pods.go:61] "coredns-6d4b75cb6d-brjzw" [7a073b93-7d6d-41af-bbc5-b6bb4ba61b61] Running
	I0725 16:56:18.966174   31337 system_pods.go:61] "etcd-embed-certs-20220725165448-14919" [35f46355-a412-4e3a-9e75-41fb9d357be2] Running
	I0725 16:56:18.966180   31337 system_pods.go:61] "kube-apiserver-embed-certs-20220725165448-14919" [b920b524-5ee8-47c8-ab93-078997c96a9d] Running
	I0725 16:56:18.966184   31337 system_pods.go:61] "kube-controller-manager-embed-certs-20220725165448-14919" [6bd916cf-3e22-4a72-8eea-ad9fc77fcdac] Running
	I0725 16:56:18.966190   31337 system_pods.go:61] "kube-proxy-qz466" [2436156a-42df-4487-bbf0-3723eaaefdfb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 16:56:18.966197   31337 system_pods.go:61] "kube-scheduler-embed-certs-20220725165448-14919" [d4172f18-e47e-434b-aef2-c0c9dbab78d5] Running
	I0725 16:56:18.966205   31337 system_pods.go:61] "metrics-server-5c6f97fb75-dvwxz" [4be1f012-c669-4285-8fce-b98e892d097f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 16:56:18.966226   31337 system_pods.go:61] "storage-provisioner" [9a9f14a2-6357-4e11-9e55-238e2bc5349d] Running
	I0725 16:56:18.966241   31337 system_pods.go:74] duration metric: took 8.767149ms to wait for pod list to return data ...
	I0725 16:56:18.966251   31337 node_conditions.go:102] verifying NodePressure condition ...
	I0725 16:56:18.969371   31337 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 16:56:18.969384   31337 node_conditions.go:123] node cpu capacity is 6
	I0725 16:56:18.969392   31337 node_conditions.go:105] duration metric: took 3.137023ms to run NodePressure ...
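
The NodePressure check reads each node's capacity (cpu, ephemeral-storage) from the Kubernetes API. A client-go sketch of the same read; the kubeconfig path is a placeholder:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; point this at the kubeconfig under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```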
	I0725 16:56:18.969403   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:19.130505   31337 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0725 16:56:19.134987   31337 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0725 16:56:19.418291   31337 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0725 16:56:19.990680   31337 kubeadm.go:777] kubelet initialised
	I0725 16:56:19.990692   31337 kubeadm.go:778] duration metric: took 860.168437ms waiting for restarted kubelet to initialise ...
	I0725 16:56:19.990701   31337 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 16:56:19.997037   31337 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-brjzw" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:20.006432   31337 pod_ready.go:92] pod "coredns-6d4b75cb6d-brjzw" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:20.006441   31337 pod_ready.go:81] duration metric: took 9.369186ms waiting for pod "coredns-6d4b75cb6d-brjzw" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:20.006448   31337 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:22.022967   31337 pod_ready.go:102] pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:24.520791   31337 pod_ready.go:102] pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:26.521281   31337 pod_ready.go:92] pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:26.521294   31337 pod_ready.go:81] duration metric: took 6.514796336s waiting for pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:26.521301   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.033931   31337 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:27.033944   31337 pod_ready.go:81] duration metric: took 512.6349ms waiting for pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.033950   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.038066   31337 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:27.038074   31337 pod_ready.go:81] duration metric: took 4.11923ms waiting for pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.038079   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qz466" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.042382   31337 pod_ready.go:92] pod "kube-proxy-qz466" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:27.042391   31337 pod_ready.go:81] duration metric: took 4.306864ms waiting for pod "kube-proxy-qz466" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.042397   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:29.054332   31337 pod_ready.go:102] pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:31.553231   31337 pod_ready.go:102] pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:33.054275   31337 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:33.054288   31337 pod_ready.go:81] duration metric: took 6.011844144s waiting for pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
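
Each pod_ready wait above reduces to checking the pod's Ready condition and re-polling while it is False. A client-go sketch of that predicate (clientset construction as in the previous sketch; helper names are illustrative):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's Ready condition is True; pods the log
// prints with "Ready":"False" fail this check and are polled again.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// checkOnce fetches one kube-system pod by name and applies the predicate.
func checkOnce(cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return isPodReady(pod), nil
}

func main() {
	fmt.Println("wire checkOnce into a poll loop as in the earlier sketches")
}
```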
	I0725 16:56:33.054295   31337 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:35.064195   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:37.065735   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:39.564369   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:41.565036   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:43.566029   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:46.066803   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:48.565574   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:50.567360   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:53.064054   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:55.064766   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:57.066535   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:59.565727   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:01.567296   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:04.067915   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:06.564528   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:08.567321   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:11.064570   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:13.065974   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:15.066410   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:17.565524   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:20.064374   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:22.066550   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:24.567486   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:26.568010   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:29.064670   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:31.065977   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:33.067605   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:35.565701   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:37.566461   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:40.067424   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:42.564117   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:44.566188   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:46.567544   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:49.065322   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:51.067604   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:53.567982   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:56.064199   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:58.066495   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	W0725 16:58:00.726845   30645 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 16:58:00.726876   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:58:01.152676   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:58:01.162348   30645 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:58:01.162398   30645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:58:01.169739   30645 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
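
The "config check failed, skipping stale config cleanup" path above fires when any of the four expected files is missing (the ls exits with status 2). A local sketch of the presence check the log performs over SSH:

```go
package main

import (
	"fmt"
	"os"
)

// haveAllConfigs mirrors the sudo ls check: stale-config cleanup only makes
// sense when every expected kubeconfig is present on the node.
func haveAllConfigs() bool {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			return false
		}
	}
	return true
}

func main() {
	if !haveAllConfigs() {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}
```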
	I0725 16:58:01.169757   30645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:58:01.932563   30645 out.go:204]   - Generating certificates and keys ...
	I0725 16:58:02.879021   30645 out.go:204]   - Booting up control plane ...
	I0725 16:58:00.067345   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:02.565160   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:05.066397   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:07.066907   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:09.564472   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:11.565607   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:14.064290   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:16.067942   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:18.568032   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:21.065165   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:23.065894   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:25.068053   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:27.568303   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:29.569270   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:32.067312   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:34.067798   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:36.567613   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:39.065477   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:41.067979   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:43.565007   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:45.566604   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:48.064632   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:50.067874   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:52.068045   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:54.568248   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:57.065466   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:59.065588   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:01.068271   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:03.564939   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:05.567021   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:08.066080   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:10.066132   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:12.067084   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:14.068876   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:16.566420   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:19.066562   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:21.066964   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:23.565970   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:26.067272   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:28.566308   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:31.065483   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:33.566418   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:36.066933   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:38.565560   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:40.566430   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:42.569077   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:45.068908   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:47.567704   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:50.068664   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:52.069481   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:57.797952   30645 kubeadm.go:397] StartCluster complete in 7m59.508645122s
	I0725 16:59:57.798033   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:59:57.827359   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.827371   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:59:57.827433   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:59:57.857686   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.857699   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:59:57.857755   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:59:57.887067   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.887079   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:59:57.887137   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:59:57.916980   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.916992   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:59:57.917054   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:59:57.946633   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.946646   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:59:57.946705   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:59:57.976302   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.976314   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:59:57.976371   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:59:58.006163   30645 logs.go:274] 0 containers: []
	W0725 16:59:58.006175   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:59:58.006233   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:59:58.034791   30645 logs.go:274] 0 containers: []
	W0725 16:59:58.034803   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:59:58.034811   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:59:58.034818   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:59:58.075762   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:59:58.075777   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:59:58.087641   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:59:58.087653   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:59:58.142043   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:59:58.142055   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:59:58.142062   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:59:58.156155   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:59:58.156167   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
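
The container-status gather above uses a shell fallback: prefer crictl when it exists, otherwise docker, via `` `which crictl || echo crictl` ps -a || sudo docker ps -a``. An equivalent Go sketch using exec.LookPath instead of the shell trick:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus lists all containers, preferring crictl when present and
// falling back to the docker CLI otherwise.
func containerStatus() ([]byte, error) {
	tool := "docker"
	if _, err := exec.LookPath("crictl"); err == nil {
		tool = "crictl"
	}
	return exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	fmt.Println(string(out), err)
}
```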
	I0725 16:59:54.568030   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:56.569052   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:00.209432   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053238365s)
	W0725 17:00:00.209581   30645 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 17:00:00.209596   30645 out.go:239] * 
	W0725 17:00:00.209762   30645 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 17:00:00.209776   30645 out.go:239] * 
	W0725 17:00:00.210311   30645 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 17:00:00.272919   30645 out.go:177] 
	W0725 17:00:00.315153   30645 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 17:00:00.315316   30645 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 17:00:00.315414   30645 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 17:00:00.372884   30645 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 23:51:54 UTC, end at Tue 2022-07-26 00:00:02 UTC. --
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[130]: time="2022-07-25T23:51:57.280625561Z" level=info msg="Processing signal 'terminated'"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[130]: time="2022-07-25T23:51:57.281621938Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[130]: time="2022-07-25T23:51:57.282179113Z" level=info msg="Daemon shutdown complete"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: docker.service: Succeeded.
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.333388918Z" level=info msg="Starting up"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335280455Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335321821Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335353731Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335365331Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336739849Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336771694Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336792129Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336802010Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.340124810Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.344053927Z" level=info msg="Loading containers: start."
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.416564242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.446250062Z" level=info msg="Loading containers: done."
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.454564731Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.454620735Z" level=info msg="Daemon has completed initialization"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Started Docker Application Container Engine.
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.478491259Z" level=info msg="API listen on [::]:2376"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.481408702Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2022-07-26T00:00:04Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  00:00:04 up  1:06,  0 users,  load average: 0.45, 0.75, 1.06
	Linux old-k8s-version-20220725164610-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 23:51:54 UTC, end at Tue 2022-07-26 00:00:04 UTC. --
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 kubelet[14430]: I0726 00:00:03.758146   14430 server.go:410] Version: v1.16.0
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 kubelet[14430]: I0726 00:00:03.758461   14430 plugins.go:100] No cloud provider specified.
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 kubelet[14430]: I0726 00:00:03.758471   14430 server.go:773] Client rotation is on, will bootstrap in background
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 kubelet[14430]: I0726 00:00:03.760220   14430 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 kubelet[14430]: W0726 00:00:03.761287   14430 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 kubelet[14430]: W0726 00:00:03.761418   14430 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 kubelet[14430]: F0726 00:00:03.761473   14430 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 26 00:00:03 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 163.
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 kubelet[14463]: I0726 00:00:04.519129   14463 server.go:410] Version: v1.16.0
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 kubelet[14463]: I0726 00:00:04.519599   14463 plugins.go:100] No cloud provider specified.
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 kubelet[14463]: I0726 00:00:04.519666   14463 server.go:773] Client rotation is on, will bootstrap in background
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 kubelet[14463]: I0726 00:00:04.521530   14463 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 kubelet[14463]: W0726 00:00:04.523344   14463 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 kubelet[14463]: W0726 00:00:04.523458   14463 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 kubelet[14463]: F0726 00:00:04.523531   14463 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 26 00:00:04 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0725 17:00:04.454816   31669 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
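The kubelet journal captured above ends in a crash loop on "failed to run Kubelet: mountpoint for cpu not found" (restart counter at 162, then 163). A minimal sketch, assuming a Linux host, of what that check amounts to: scan /proc/mounts for a cgroup v1 hierarchy exposing the cpu controller. On a cgroup-v2-only host (such as this Docker Desktop linuxkit VM), no such mount exists, and a kubelet as old as v1.16 predates cgroup v2 support. This is illustrative Go, not minikube or test-suite code:

	// cgroupcheck.go - hypothetical sketch, not part of this test suite: look
	// for a cgroup v1 mount carrying the "cpu" controller, the mountpoint the
	// v1.16 kubelet fails to find in the journal above.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/proc/mounts")
		if err != nil {
			fmt.Println("cannot read /proc/mounts:", err)
			return
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// /proc/mounts fields: device mountpoint fstype options dump pass
			fields := strings.Fields(sc.Text())
			if len(fields) < 4 || fields[2] != "cgroup" {
				continue
			}
			for _, opt := range strings.Split(fields[3], ",") {
				if opt == "cpu" {
					fmt.Println("cpu cgroup (v1) mounted at", fields[1])
					return
				}
			}
		}
		fmt.Println("no cpu cgroup v1 mountpoint found")
	}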
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 2 (454.771321ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220725164610-14919" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (492.12s)
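The "[kubelet-check]" lines repeated throughout this failure are kubeadm polling the kubelet's local healthz endpoint and getting "connection refused", which is consistent with the crash loop above; the log's own suggestion is to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch, assuming it runs on the node itself, of the probe kubeadm describes as 'curl -sSL http://localhost:10248/healthz':

	// healthzprobe.go - hypothetical sketch reproducing kubeadm's kubelet
	// health probe against the default healthz port 10248.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// "connection refused", as in the log above, means no kubelet
			// process is listening at all - it crashed or never started.
			fmt.Println("kubelet not healthy:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
	}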

x
+
TestStartStop/group/no-preload/serial/Pause (43.93s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220725164719-14919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919
E0725 16:54:08.923641   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919: exit status 2 (16.179463802s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919: exit status 2 (16.144644978s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220725164719-14919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-darwin-amd64 unpause -p no-preload-20220725164719-14919 --alsologtostderr -v=1: (1.026850749s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919
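The --format arguments used by these status checks ({{.APIServer}}, {{.Kubelet}}, {{.Host}}) are Go text/template strings rendered against the profile's status. A simplified, hypothetical sketch of that mechanism (the Status struct here is a stand-in, not minikube's actual type):

	// statusformat.go - hypothetical sketch of how a --format template such as
	// {{.APIServer}} renders a status struct to the bare "Stopped" seen above.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints "Stopped"
	}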
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220725164719-14919
helpers_test.go:235: (dbg) docker inspect no-preload-20220725164719-14919:

-- stdout --
	[
	    {
	        "Id": "ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2",
	        "Created": "2022-07-25T23:47:21.494738173Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235566,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:48:41.371128777Z",
	            "FinishedAt": "2022-07-25T23:48:39.424396366Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2/hosts",
	        "LogPath": "/var/lib/docker/containers/ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2/ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2-json.log",
	        "Name": "/no-preload-20220725164719-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220725164719-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220725164719-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5579c431d7f5e88eed0b2c9884c4b6e7591fa8d54b9274b5fe9a8404a4863192-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c700ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/docker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a700e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5579c431d7f5e88eed0b2c9884c4b6e7591fa8d54b9274b5fe9a8404a4863192/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5579c431d7f5e88eed0b2c9884c4b6e7591fa8d54b9274b5fe9a8404a4863192/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5579c431d7f5e88eed0b2c9884c4b6e7591fa8d54b9274b5fe9a8404a4863192/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220725164719-14919",
	                "Source": "/var/lib/docker/volumes/no-preload-20220725164719-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220725164719-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220725164719-14919",
	                "name.minikube.sigs.k8s.io": "no-preload-20220725164719-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4905c443ddaead38549ceeb1061d8ecf605772579655f9127b0e1ba8b821ba9b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50685"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50686"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50687"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50688"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50689"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4905c443ddae",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220725164719-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ddc44e208687",
	                        "no-preload-20220725164719-14919"
	                    ],
	                    "NetworkID": "782d8a0b933ddac573007847cec70a531eee56f5c5e0713703bef5697069ae1d",
	                    "EndpointID": "f02c7e5ed58b7f718bc5210901e3a8c34b46ddb34a98d28c68bc204396a05cad",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220725164719-14919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220725164719-14919 logs -n 25: (2.674927924s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p calico-20220725163046-14919                    | calico-20220725163046-14919             | jenkins | v1.26.0 | 25 Jul 22 16:44 PDT | 25 Jul 22 16:44 PDT |
	| start   | -p bridge-20220725163045-14919                    | bridge-20220725163045-14919             | jenkins | v1.26.0 | 25 Jul 22 16:44 PDT | 25 Jul 22 16:44 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p false-20220725163046-14919                     | false-20220725163046-14919              | jenkins | v1.26.0 | 25 Jul 22 16:44 PDT | 25 Jul 22 16:44 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| ssh     | -p bridge-20220725163045-14919                    | bridge-20220725163045-14919             | jenkins | v1.26.0 | 25 Jul 22 16:44 PDT | 25 Jul 22 16:44 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p false-20220725163046-14919                     | false-20220725163046-14919              | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	| delete  | -p bridge-20220725163045-14919                    | bridge-20220725163045-14919             | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	| start   | -p                                                | enable-default-cni-20220725163045-14919 | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	|         | enable-default-cni-20220725163045-14919           |                                         |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| start   | -p                                                | kubenet-20220725163045-14919            | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	|         | kubenet-20220725163045-14919                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220725163045-14919 | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	|         | enable-default-cni-20220725163045-14919           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220725163045-14919            | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	|         | kubenet-20220725163045-14919                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220725163045-14919 | jenkins | v1.26.0 | 25 Jul 22 16:46 PDT | 25 Jul 22 16:46 PDT |
	|         | enable-default-cni-20220725163045-14919           |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:46 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220725163045-14919            | jenkins | v1.26.0 | 25 Jul 22 16:47 PDT | 25 Jul 22 16:47 PDT |
	|         | kubenet-20220725163045-14919                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:47 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:53 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:50 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT | 25 Jul 22 16:51 PDT |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT | 25 Jul 22 16:51 PDT |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 16:51:53
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 16:51:53.294201   30645 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:51:53.294366   30645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:51:53.294371   30645 out.go:309] Setting ErrFile to fd 2...
	I0725 16:51:53.294375   30645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:51:53.294471   30645 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:51:53.294941   30645 out.go:303] Setting JSON to false
	I0725 16:51:53.309887   30645 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10036,"bootTime":1658783077,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:51:53.309984   30645 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:51:53.331402   30645 out.go:177] * [old-k8s-version-20220725164610-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:51:53.373600   30645 notify.go:193] Checking for updates...
	I0725 16:51:53.395513   30645 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:51:53.417111   30645 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:51:53.438407   30645 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:51:53.459736   30645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:51:53.481553   30645 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:51:53.504223   30645 config.go:178] Loaded profile config "old-k8s-version-20220725164610-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 16:51:53.526315   30645 out.go:177] * Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	I0725 16:51:53.547450   30645 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:51:53.618847   30645 docker.go:137] docker version: linux-20.10.17
	I0725 16:51:53.618995   30645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:51:53.753067   30645 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:51:53.688740284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:51:53.796714   30645 out.go:177] * Using the docker driver based on existing profile
	I0725 16:51:53.817466   30645 start.go:284] selected driver: docker
	I0725 16:51:53.817494   30645 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:51:53.817613   30645 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:51:53.820630   30645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:51:53.953927   30645 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:51:53.891132742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:51:53.954103   30645 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 16:51:53.954124   30645 cni.go:95] Creating CNI manager for ""
	I0725 16:51:53.954135   30645 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:51:53.954143   30645 start_flags.go:310] config:
	{Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:51:53.997664   30645 out.go:177] * Starting control plane node old-k8s-version-20220725164610-14919 in cluster old-k8s-version-20220725164610-14919
	I0725 16:51:54.018754   30645 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:51:54.039707   30645 out.go:177] * Pulling base image ...
	I0725 16:51:54.082764   30645 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:51:54.082795   30645 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:51:54.082852   30645 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 16:51:54.082881   30645 cache.go:57] Caching tarball of preloaded images
	I0725 16:51:54.083082   30645 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 16:51:54.083106   30645 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0725 16:51:54.084260   30645 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/config.json ...
	I0725 16:51:54.147078   30645 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:51:54.147095   30645 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:51:54.147107   30645 cache.go:208] Successfully downloaded all kic artifacts
	I0725 16:51:54.147181   30645 start.go:370] acquiring machines lock for old-k8s-version-20220725164610-14919: {Name:mk039986a3467f394c941873ee88acd0fb616596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:51:54.147261   30645 start.go:374] acquired machines lock for "old-k8s-version-20220725164610-14919" in 61.057µs
	I0725 16:51:54.147278   30645 start.go:95] Skipping create...Using existing machine configuration
	I0725 16:51:54.147288   30645 fix.go:55] fixHost starting: 
	I0725 16:51:54.147527   30645 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725164610-14919 --format={{.State.Status}}
	I0725 16:51:54.215341   30645 fix.go:103] recreateIfNeeded on old-k8s-version-20220725164610-14919: state=Stopped err=<nil>
	W0725 16:51:54.215374   30645 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 16:51:54.259242   30645 out.go:177] * Restarting existing docker container for "old-k8s-version-20220725164610-14919" ...
	I0725 16:51:50.322882   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:51:52.874717   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:51:54.284887   30645 cli_runner.go:164] Run: docker start old-k8s-version-20220725164610-14919
	I0725 16:51:54.645993   30645 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725164610-14919 --format={{.State.Status}}
	I0725 16:51:54.722808   30645 kic.go:415] container "old-k8s-version-20220725164610-14919" state is running.
	I0725 16:51:54.723439   30645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:51:54.808300   30645 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/config.json ...
	I0725 16:51:54.808762   30645 machine.go:88] provisioning docker machine ...
	I0725 16:51:54.808790   30645 ubuntu.go:169] provisioning hostname "old-k8s-version-20220725164610-14919"
	I0725 16:51:54.808863   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:54.891385   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:54.891620   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:54.891634   30645 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220725164610-14919 && echo "old-k8s-version-20220725164610-14919" | sudo tee /etc/hostname
	I0725 16:51:55.024662   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220725164610-14919
	
	I0725 16:51:55.024757   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.103341   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:55.103525   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:55.103544   30645 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220725164610-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220725164610-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220725164610-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 16:51:55.230047   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:51:55.230076   30645 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:51:55.230107   30645 ubuntu.go:177] setting up certificates
	I0725 16:51:55.230119   30645 provision.go:83] configureAuth start
	I0725 16:51:55.230190   30645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:51:55.301676   30645 provision.go:138] copyHostCerts
	I0725 16:51:55.301768   30645 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:51:55.301778   30645 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:51:55.301894   30645 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:51:55.302095   30645 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:51:55.302104   30645 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:51:55.302175   30645 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:51:55.302315   30645 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:51:55.302321   30645 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:51:55.302379   30645 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:51:55.302507   30645 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220725164610-14919 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220725164610-14919]
	I0725 16:51:55.405165   30645 provision.go:172] copyRemoteCerts
	I0725 16:51:55.405225   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:51:55.405293   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.477166   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:55.565264   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:51:55.582096   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0725 16:51:55.599314   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 16:51:55.616047   30645 provision.go:86] duration metric: configureAuth took 385.912561ms
	I0725 16:51:55.616059   30645 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:51:55.616211   30645 config.go:178] Loaded profile config "old-k8s-version-20220725164610-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 16:51:55.616261   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.687491   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:55.687629   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:55.687638   30645 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:51:55.809152   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:51:55.809170   30645 ubuntu.go:71] root file system type: overlay
	I0725 16:51:55.809333   30645 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:51:55.809407   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.886743   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:55.886909   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:55.886957   30645 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:51:56.015134   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:51:56.015230   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.087087   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:56.087253   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:56.087280   30645 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 16:51:56.212027   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:51:56.212044   30645 machine.go:91] provisioned docker machine in 1.403264453s
	I0725 16:51:56.212055   30645 start.go:307] post-start starting for "old-k8s-version-20220725164610-14919" (driver="docker")
	I0725 16:51:56.212061   30645 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:51:56.212133   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:51:56.212177   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.283031   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:56.372939   30645 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:51:56.376433   30645 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:51:56.376447   30645 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:51:56.376454   30645 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:51:56.376458   30645 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 16:51:56.376467   30645 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:51:56.376572   30645 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:51:56.376727   30645 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:51:56.376875   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:51:56.383744   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:51:56.400937   30645 start.go:310] post-start completed in 188.872215ms
	I0725 16:51:56.401013   30645 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:51:56.401059   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.472425   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:56.558421   30645 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:51:56.562865   30645 fix.go:57] fixHost completed within 2.41556105s
	I0725 16:51:56.562873   30645 start.go:82] releasing machines lock for "old-k8s-version-20220725164610-14919", held for 2.415589014s
	I0725 16:51:56.562940   30645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:51:56.634630   30645 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:51:56.634634   30645 ssh_runner.go:195] Run: systemctl --version
	I0725 16:51:56.634711   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.634710   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.712937   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:56.715060   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:57.028274   30645 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:51:57.039409   30645 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:51:57.039463   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:51:57.050978   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:51:57.064294   30645 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 16:51:57.131183   30645 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:51:57.197441   30645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:51:57.258729   30645 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:51:57.458205   30645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:51:57.493961   30645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:51:57.573579   30645 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0725 16:51:57.573720   30645 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220725164610-14919 dig +short host.docker.internal
	I0725 16:51:57.708897   30645 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:51:57.708998   30645 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:51:57.713113   30645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:51:57.723064   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:57.796445   30645 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:51:57.796515   30645 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:51:57.828170   30645 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 16:51:57.828195   30645 docker.go:542] Images already preloaded, skipping extraction
	I0725 16:51:57.828273   30645 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:51:57.862686   30645 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 16:51:57.862711   30645 cache_images.go:84] Images are preloaded, skipping loading
	I0725 16:51:57.862784   30645 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 16:51:57.934841   30645 cni.go:95] Creating CNI manager for ""
	I0725 16:51:57.934857   30645 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:51:57.934882   30645 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:51:57.934897   30645 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220725164610-14919 NodeName:old-k8s-version-20220725164610-14919 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:51:57.934999   30645 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220725164610-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220725164610-14919
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 16:51:57.935085   30645 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220725164610-14919 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 16:51:57.935149   30645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0725 16:51:57.942882   30645 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 16:51:57.942933   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:51:57.949836   30645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0725 16:51:57.962118   30645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 16:51:57.974768   30645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0725 16:51:57.987611   30645 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:51:57.991547   30645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:51:58.001422   30645 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919 for IP: 192.168.67.2
	I0725 16:51:58.001534   30645 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:51:58.001584   30645 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:51:58.001665   30645 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/client.key
	I0725 16:51:58.001725   30645 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key.c7fa3a9e
	I0725 16:51:58.001774   30645 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.key
	I0725 16:51:58.001977   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:51:58.002018   30645 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:51:58.002033   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:51:58.002065   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:51:58.002099   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:51:58.002130   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:51:58.002200   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:51:58.002745   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:51:58.019176   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 16:51:58.035937   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:51:58.052722   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 16:51:58.069150   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:51:58.086282   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:51:58.104583   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:51:58.122151   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:51:58.138902   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:51:58.155678   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:51:58.172462   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:51:58.189680   30645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 16:51:58.202927   30645 ssh_runner.go:195] Run: openssl version
	I0725 16:51:58.208487   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:51:58.216327   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:51:58.220281   30645 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:51:58.220320   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:51:58.225423   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 16:51:58.232569   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:51:58.240681   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:51:58.246603   30645 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:51:58.246655   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:51:58.252424   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
	I0725 16:51:58.259635   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:51:58.267350   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:51:58.271022   30645 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:51:58.271059   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:51:58.276368   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 16:51:58.285978   30645 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
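
Note: the StartCluster entry above is Go's %+v rendering of minikube's cluster-config struct, one Field:value pair per field with nested structs inlined, which is why it prints as a single very long line. A toy illustration of the format, using a made-up struct:

    package main

    import "fmt"

    type Node struct {
        Name string
        IP   string
        Port int
    }

    type Config struct {
        Name  string
        Nodes []Node
    }

    func main() {
        cfg := Config{Name: "old-k8s-version", Nodes: []Node{{IP: "192.168.67.2", Port: 8443}}}
        // Prints: {Name:old-k8s-version Nodes:[{Name: IP:192.168.67.2 Port:8443}]}
        fmt.Printf("%+v\n", cfg)
    }
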
	I0725 16:51:58.286085   30645 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:51:55.324035   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:51:57.821695   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:51:59.822225   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
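
Note: the pod_ready.go lines interleaved here come from the second profile under test (pid 30296): minikube is polling the metrics-server pod's status conditions every ~2s, and "Ready":"False" means the PodReady condition is present but false. A sketch of that condition check against the core/v1 types (podReady is an illustrative helper, not minikube's own):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podReady reports whether the PodReady condition is True; the log above
    // keeps printing its value as "Ready":"False" while it polls.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse},
        }}}
        fmt.Println(podReady(pod)) // false
    }
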
	I0725 16:51:58.315858   30645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:51:58.326514   30645 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 16:51:58.326531   30645 kubeadm.go:626] restartCluster start
	I0725 16:51:58.326585   30645 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 16:51:58.333523   30645 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:58.333587   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:58.406233   30645 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220725164610-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:51:58.406423   30645 kubeconfig.go:127] "old-k8s-version-20220725164610-14919" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig - will repair!
	I0725 16:51:58.406758   30645 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
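
Note: here kubeconfig.go detects that the profile's context is absent from the shared kubeconfig and repairs the file under a write lock. A sketch of such a repair using client-go's clientcmd package; the path and endpoint are placeholders and credentials are omitted:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // ensureContext adds a missing named context to an existing kubeconfig.
    func ensureContext(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Contexts[name]; ok {
            return nil // context already present, nothing to repair
        }
        cfg.Clusters[name] = &api.Cluster{Server: server}
        cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        cfg.CurrentContext = name
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        // Placeholder path and endpoint; the real test uses the Jenkins workspace kubeconfig.
        if err := ensureContext("/path/to/kubeconfig", "old-k8s-version-20220725164610-14919", "https://127.0.0.1:8443"); err != nil {
            fmt.Println(err)
        }
    }
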
	I0725 16:51:58.408147   30645 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 16:51:58.416141   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:58.416194   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:58.424141   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:58.624252   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:58.624449   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:58.634727   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:58.824496   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:58.824556   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:58.833401   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.024564   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.024765   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.036943   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.224262   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.224449   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.234247   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.426277   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.426421   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.436848   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.624325   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.624444   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.634776   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.824436   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.824539   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.833466   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.024667   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.024784   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.034119   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.226332   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.226493   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.237410   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.424816   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.424991   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.435741   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.624358   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.624553   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.634929   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.824246   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.824311   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.833267   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.025582   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.025682   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.036617   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.226302   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.226523   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.237134   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.424681   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.424896   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.434950   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.434960   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.435004   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.443251   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.443262   30645 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
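
Note: the ~200ms pgrep loop above is a standard condition poll, and "timed out waiting for the condition" is the stock error string from the k8s.io/apimachinery wait package, which kubeadm.go surfaces here as the reconfigure trigger. A sketch of the same pattern; the interval and timeout values are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll for a running kube-apiserver the way the log does, giving up
        // with wait's "timed out waiting for the condition" error.
        err := wait.PollImmediate(200*time.Millisecond, 3*time.Second, func() (bool, error) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            return err == nil && len(out) > 0, nil // exit status 1 = no match; keep polling
        })
        if err != nil {
            fmt.Println("apiserver error:", err) // apiserver error: timed out waiting for the condition
        }
    }
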
	I0725 16:52:01.443270   30645 kubeadm.go:1092] stopping kube-system containers ...
	I0725 16:52:01.443330   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:52:01.472271   30645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 16:52:01.482849   30645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:52:01.490579   30645 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5751 Jul 25 23:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5787 Jul 25 23:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Jul 25 23:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jul 25 23:48 /etc/kubernetes/scheduler.conf
	
	I0725 16:52:01.490646   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 16:52:01.497991   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 16:52:01.505650   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 16:52:01.513404   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 16:52:01.520481   30645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:52:01.528605   30645 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 16:52:01.528616   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:01.582488   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:02.177208   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:02.396495   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:02.452157   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
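
Note: rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied kubeadm.yaml. A sketch of that sequence, assuming a root-capable shell on the node; runPhase is an illustrative helper, not minikube's own:

    package main

    import (
        "fmt"
        "os/exec"
    )

    const binDir = "/var/lib/minikube/binaries/v1.16.0"

    // runPhase replays one `kubeadm init phase ...` exactly as the log shows.
    func runPhase(phase string) error {
        cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, binDir, phase)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("phase %q: %v\n%s", phase, err, out)
        }
        return nil
    }

    func main() {
        for _, p := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
            if err := runPhase(p); err != nil {
                fmt.Println(err)
                return
            }
        }
    }
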
	I0725 16:52:02.507122   30645 api_server.go:51] waiting for apiserver process to appear ...
	I0725 16:52:02.507183   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:03.017988   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:01.823344   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:04.322726   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:03.516813   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:04.016024   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:04.516243   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:05.016052   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:05.516842   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:06.018016   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:06.516243   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:07.016833   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:07.516509   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:08.018237   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:06.821972   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:08.822475   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:08.516285   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:09.018225   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:09.516196   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:10.016108   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:10.518092   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:11.016235   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:11.516051   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:12.017661   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:12.517835   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:13.017094   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:11.324833   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:13.821393   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:13.517087   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:14.016089   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:14.516418   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:15.016429   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:15.516149   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:16.016347   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:16.516154   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:17.016835   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:17.516145   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:18.016344   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:15.823039   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:17.824060   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:18.516408   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:19.016498   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:19.517496   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:20.016992   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:20.516251   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:21.016222   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:21.517681   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:22.016475   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:22.516287   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:23.018246   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:20.324836   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:22.822073   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:24.822724   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:23.516453   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:24.016928   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:24.518267   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:25.016180   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:25.517130   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:26.016427   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:26.516198   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:27.018318   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:27.518273   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:28.017144   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:26.823978   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:29.324885   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:28.517115   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:29.016589   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:29.516148   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:30.018359   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:30.516196   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:31.016729   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:31.516466   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:32.016321   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:32.516187   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:33.016955   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:31.823607   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:34.323121   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:33.518380   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:34.016250   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:34.518380   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:35.017698   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:35.516226   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:36.016845   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:36.517175   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:37.016458   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:37.518343   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:38.017221   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:36.823814   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:38.824757   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:38.516631   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:39.018346   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:39.517031   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:40.016587   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:40.518374   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:41.017168   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:41.516254   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:42.016786   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:42.518371   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:43.016708   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:41.324898   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:43.821522   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:43.517350   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:44.016879   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:44.516359   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:45.016326   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:45.517079   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:46.018104   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:46.516554   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:47.016350   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:47.516869   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:48.016960   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:45.822541   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:48.322265   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:48.518539   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:49.016387   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:49.518485   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:50.016779   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:50.516308   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:51.016390   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:51.516855   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:52.016682   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:52.516798   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:53.017157   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:50.325107   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:52.822776   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:54.822863   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:53.516791   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:54.018461   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:54.518509   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:55.016394   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:55.518239   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:56.016393   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:56.516649   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:57.018403   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:57.518492   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:58.016728   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:57.322195   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:59.325110   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:58.516610   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:59.016695   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:59.516374   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:00.018527   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:00.516554   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:01.016461   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:01.518568   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:02.018357   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:02.516570   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:02.551458   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.551470   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:02.551529   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:02.580662   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.580676   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:02.580736   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:02.609061   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.609077   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:02.609153   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:02.637777   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.637789   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:02.637848   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:02.668016   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.668032   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:02.668098   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:02.695681   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.695695   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:02.695759   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:02.724166   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.724179   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:02.724241   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:02.752726   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.752738   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:02.752745   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:02.752752   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:02.766718   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:02.766729   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
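
Note: the container-status one-liner above is a portable fallback: if `which crictl` succeeds, the substitution runs the real crictl binary; if not, the echo substitutes a bare "crictl" that cannot be resolved, the left-hand command fails, and the || falls through to `sudo docker ps -a`. A sketch that builds and runs the same command:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // If `which crictl` fails, the echo substitutes a bare "crictl" that
        // cannot be found, so the || fallback runs `docker ps -a` instead.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(string(out))
    }
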
	I0725 16:53:01.823541   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:53:03.823599   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:53:04.817904   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051150373s)
	I0725 16:53:04.818052   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:04.818058   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:04.859354   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:04.859367   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:04.872868   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:04.872886   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:04.925729   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
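
Note: "connection refused" on localhost:8443, as opposed to a timeout, means nothing is listening on the apiserver port at all, which is consistent with the empty docker ps results above. The same distinction can be made with a plain TCP dial (illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused TCP dial confirms there is no listener on the apiserver port,
        // matching kubectl's "connection to the server localhost:8443 was refused".
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not listening:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }
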
	I0725 16:53:07.427981   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:07.518459   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:07.547888   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.547903   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:07.547963   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:07.577077   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.577088   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:07.577149   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:07.605370   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.605382   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:07.605438   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:07.634582   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.634594   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:07.634664   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:07.662717   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.662730   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:07.662796   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:07.690179   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.690191   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:07.690247   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:07.718778   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.718797   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:07.718860   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:07.750543   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.750557   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:07.750566   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:07.750582   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:07.813932   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:07.813946   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:07.813953   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:07.830288   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:07.830306   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:06.323264   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:53:08.822901   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:53:09.316485   30296 pod_ready.go:81] duration metric: took 4m0.003871836s waiting for pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace to be "Ready" ...
	E0725 16:53:09.316502   30296 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 16:53:09.316516   30296 pod_ready.go:38] duration metric: took 4m13.56403836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 16:53:09.316548   30296 kubeadm.go:630] restartCluster took 4m23.75685988s
	W0725 16:53:09.316641   30296 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
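
Note: at this point the 4m restart budget is spent, so minikube abandons the in-place restart and falls back to a destructive kubeadm reset followed by a clean init; the "duration metric: took ..." lines are plain elapsed-time measurements. A control-flow sketch of that fallback; both functions are illustrative stand-ins, not minikube's implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // Illustrative stand-ins for the restart-then-reset fallback in the log.
    func restartCluster() error { return errors.New("timed out waiting 4m0s for pods to be \"Ready\"") }
    func resetCluster() error   { return nil }

    func main() {
        start := time.Now()
        err := restartCluster()
        fmt.Println("restartCluster took", time.Since(start))
        if err != nil {
            fmt.Println("! Unable to restart cluster, will reset it:", err)
            if err := resetCluster(); err != nil {
                fmt.Println(err)
            }
        }
    }
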
	I0725 16:53:09.316663   30296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 16:53:11.757360   30296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.440666286s)
	I0725 16:53:11.757423   30296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:53:11.767435   30296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:53:11.775142   30296 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:53:11.775192   30296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:53:11.782840   30296 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
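
Note: kubeadm.go:152 treats the ls exit status 2 (GNU ls uses 2 for inaccessible command-line arguments, and the reset removed all four files) as proof there is no stale config to clean up, so it proceeds straight to the full kubeadm init on the next line. Branching on that exit code in Go looks like this (illustrative):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("sudo", "ls", "-la",
            "/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf").Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
            // ls exits 2 when an argument is missing: nothing stale to clean up,
            // so fall through to a full `kubeadm init`.
            fmt.Println("config check failed, skipping stale config cleanup")
        }
    }
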
	I0725 16:53:11.782879   30296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:53:12.113028   30296 out.go:204]   - Generating certificates and keys ...
	I0725 16:53:09.887017   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056682522s)
	I0725 16:53:09.887208   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:09.887216   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:09.934241   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:09.934269   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:12.447495   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:12.517256   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:12.548709   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.548724   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:12.548801   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:12.581560   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.581573   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:12.581636   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:12.613258   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.613277   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:12.613356   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:12.645116   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.645132   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:12.645192   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:12.678405   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.678430   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:12.678496   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:12.709850   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.709862   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:12.709929   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:12.739704   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.739717   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:12.739780   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:12.771373   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.771390   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:12.771397   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:12.771409   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:13.124509   30296 out.go:204]   - Booting up control plane ...
	I0725 16:53:14.832595   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061157284s)
	I0725 16:53:14.832749   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:14.832760   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:14.882568   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:14.882589   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:14.894614   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:14.894627   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:14.964822   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:14.964845   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:14.964855   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:17.480696   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:17.516779   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:17.560432   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.560445   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:17.560504   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:17.590394   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.590408   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:17.590480   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:17.620155   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.620169   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:17.620234   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:17.651346   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.651376   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:17.651448   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:17.683049   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.683062   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:17.683121   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:17.720876   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.720905   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:17.720964   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:17.768214   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.768254   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:17.768357   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:17.800978   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.800991   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:17.800999   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:17.801005   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:17.814855   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:17.814871   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:20.175651   30296 out.go:204]   - Configuring RBAC rules ...
	I0725 16:53:19.878600   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.063699299s)
	I0725 16:53:19.878715   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:19.878726   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:19.927808   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:19.927830   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:19.942138   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:19.942177   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:20.000061   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:22.501063   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:22.516620   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:22.546166   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.546178   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:22.546235   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:22.574812   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.574824   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:22.574886   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:22.604962   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.604974   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:22.605036   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:22.636264   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.636278   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:22.636339   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:22.665920   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.665932   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:22.665993   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:22.696167   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.696179   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:22.696236   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:22.729381   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.729392   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:22.729454   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:22.768159   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.768172   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:22.768207   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:22.768215   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:22.813804   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:22.813818   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:22.826686   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:22.826700   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:22.889943   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:22.889958   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:22.889964   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:22.905871   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:22.905885   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:20.554422   30296 cni.go:95] Creating CNI manager for ""
	I0725 16:53:20.554435   30296 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:53:20.554455   30296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 16:53:20.554509   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b minikube.k8s.io/name=no-preload-20220725164719-14919 minikube.k8s.io/updated_at=2022_07_25T16_53_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:20.554518   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:20.815468   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:20.815488   30296 ops.go:34] apiserver oom_adj: -16
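
Note: ops.go is reading /proc/<apiserver pid>/oom_adj, the legacy OOM-killer adjustment (range -17 to 15, with -17 disabling OOM kills entirely); a value of -16 makes the kernel strongly prefer other victims over the apiserver. Reading it the same way the log does (Linux-only, illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the apiserver PID, then read its legacy OOM-killer adjustment,
        // the same check the log performs with cat /proc/$(pgrep ...)/oom_adj.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("no kube-apiserver process:", err)
            return
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // e.g. -16
    }
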
	I0725 16:53:21.371512   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:21.872708   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:22.372928   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:22.871252   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:23.371865   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:23.872764   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:24.372857   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:24.871534   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:24.961550   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055639315s)
	I0725 16:53:27.462514   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:27.516705   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:27.547013   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.547025   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:27.547088   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:27.575083   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.575095   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:27.575151   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:27.607755   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.607767   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:27.607822   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:27.636173   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.636184   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:27.636251   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:27.664856   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.664867   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:27.664930   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:27.695642   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.695655   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:27.695717   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:27.725344   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.725358   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:27.725417   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:27.754182   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.754195   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:27.754202   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:27.754208   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:27.767896   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:27.767911   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:27.824064   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:27.824076   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:27.824083   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:27.838119   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:27.838131   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:25.371471   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:25.872363   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:26.372010   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:26.871172   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:27.371984   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:27.871600   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:28.371423   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:28.872789   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:29.372643   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:29.872028   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:29.892047   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053889683s)
	I0725 16:53:29.892158   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:29.892165   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:32.435110   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:32.516701   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:32.562525   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.562538   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:32.562604   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:32.599075   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.599087   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:32.599145   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:32.640588   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.640615   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:32.640684   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:32.675235   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.675248   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:32.675311   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:32.711380   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.711392   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:32.711462   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:32.745360   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.745373   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:32.745433   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:32.782468   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.782484   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:32.782569   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:32.815537   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.815551   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:32.815557   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:32.815565   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:32.828567   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:32.828584   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:32.884919   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:32.884933   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:32.884941   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:32.900762   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:32.900776   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:30.373362   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:30.873259   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:31.373357   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:31.871239   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:32.372542   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:32.871171   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:33.372834   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:33.432185   30296 kubeadm.go:1045] duration metric: took 12.877635728s to wait for elevateKubeSystemPrivileges.
	I0725 16:53:33.432203   30296 kubeadm.go:397] StartCluster complete in 4m47.911603505s
	I0725 16:53:33.432223   30296 settings.go:142] acquiring lock: {Name:mkcd702d4f365962a78fa014f59c2f8489658e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:53:33.432300   30296 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:53:33.432839   30296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:53:33.947550   30296 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220725164719-14919" rescaled to 1
	I0725 16:53:33.947586   30296 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 16:53:33.947600   30296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 16:53:33.947630   30296 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 16:53:33.970331   30296 out.go:177] * Verifying Kubernetes components...
	I0725 16:53:33.947781   30296 config.go:178] Loaded profile config "no-preload-20220725164719-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:53:33.970396   30296 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220725164719-14919"
	I0725 16:53:33.970401   30296 addons.go:65] Setting dashboard=true in profile "no-preload-20220725164719-14919"
	I0725 16:53:33.970409   30296 addons.go:65] Setting metrics-server=true in profile "no-preload-20220725164719-14919"
	I0725 16:53:33.970413   30296 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220725164719-14919"
	I0725 16:53:34.031972   30296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220725164719-14919"
	I0725 16:53:34.031978   30296 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220725164719-14919"
	I0725 16:53:34.031985   30296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:53:34.031986   30296 addons.go:153] Setting addon metrics-server=true in "no-preload-20220725164719-14919"
	I0725 16:53:34.031983   30296 addons.go:153] Setting addon dashboard=true in "no-preload-20220725164719-14919"
	W0725 16:53:34.031995   30296 addons.go:162] addon metrics-server should already be in state true
	W0725 16:53:34.031999   30296 addons.go:162] addon storage-provisioner should already be in state true
	W0725 16:53:34.032003   30296 addons.go:162] addon dashboard should already be in state true
	I0725 16:53:34.032038   30296 host.go:66] Checking if "no-preload-20220725164719-14919" exists ...
	I0725 16:53:34.032039   30296 host.go:66] Checking if "no-preload-20220725164719-14919" exists ...
	I0725 16:53:34.032073   30296 host.go:66] Checking if "no-preload-20220725164719-14919" exists ...
	I0725 16:53:34.032281   30296 cli_runner.go:164] Run: docker container inspect no-preload-20220725164719-14919 --format={{.State.Status}}
	I0725 16:53:34.033354   30296 cli_runner.go:164] Run: docker container inspect no-preload-20220725164719-14919 --format={{.State.Status}}
	I0725 16:53:34.033363   30296 cli_runner.go:164] Run: docker container inspect no-preload-20220725164719-14919 --format={{.State.Status}}
	I0725 16:53:34.033360   30296 cli_runner.go:164] Run: docker container inspect no-preload-20220725164719-14919 --format={{.State.Status}}
	I0725 16:53:34.043603   30296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 16:53:34.057441   30296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220725164719-14919
	I0725 16:53:34.186692   30296 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 16:53:34.207298   30296 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 16:53:34.208731   30296 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220725164719-14919"
	I0725 16:53:34.228297   30296 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	W0725 16:53:34.249380   30296 addons.go:162] addon default-storageclass should already be in state true
	I0725 16:53:34.261772   30296 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220725164719-14919" to be "Ready" ...
	I0725 16:53:34.270416   30296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 16:53:34.270448   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 16:53:34.270460   30296 host.go:66] Checking if "no-preload-20220725164719-14919" exists ...
	I0725 16:53:34.291176   30296 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 16:53:34.291315   30296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725164719-14919
	I0725 16:53:34.291737   30296 cli_runner.go:164] Run: docker container inspect no-preload-20220725164719-14919 --format={{.State.Status}}
	I0725 16:53:34.296062   30296 node_ready.go:49] node "no-preload-20220725164719-14919" has status "Ready":"True"
	I0725 16:53:34.333550   30296 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 16:53:34.333555   30296 node_ready.go:38] duration metric: took 42.367204ms waiting for node "no-preload-20220725164719-14919" to be "Ready" ...
	I0725 16:53:34.333562   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 16:53:34.312459   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 16:53:34.333565   30296 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 16:53:34.333595   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 16:53:34.333628   30296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725164719-14919
	I0725 16:53:34.333685   30296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725164719-14919
	I0725 16:53:34.342717   30296 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-pk97r" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:34.441662   30296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50685 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/no-preload-20220725164719-14919/id_rsa Username:docker}
	I0725 16:53:34.442634   30296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50685 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/no-preload-20220725164719-14919/id_rsa Username:docker}
	I0725 16:53:34.442716   30296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50685 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/no-preload-20220725164719-14919/id_rsa Username:docker}
	I0725 16:53:34.444590   30296 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 16:53:34.444602   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 16:53:34.444658   30296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725164719-14919
	I0725 16:53:34.526466   30296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50685 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/no-preload-20220725164719-14919/id_rsa Username:docker}
	I0725 16:53:34.608549   30296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 16:53:34.609187   30296 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 16:53:34.609201   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 16:53:34.611760   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 16:53:34.611786   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 16:53:34.634836   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 16:53:34.634860   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 16:53:34.638455   30296 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 16:53:34.638473   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 16:53:34.710670   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 16:53:34.710687   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 16:53:34.718410   30296 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 16:53:34.718428   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 16:53:34.727332   30296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 16:53:34.738119   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 16:53:34.738133   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 16:53:34.744747   30296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 16:53:34.823409   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 16:53:34.823440   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 16:53:34.918418   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 16:53:34.918433   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 16:53:35.013403   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 16:53:35.013429   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 16:53:35.111110   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 16:53:35.111127   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 16:53:35.114460   30296 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.070811321s)
	I0725 16:53:35.114484   30296 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0725 16:53:35.138037   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 16:53:35.138060   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 16:53:35.230663   30296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 16:53:35.542338   30296 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220725164719-14919"
	I0725 16:53:36.412013   30296 pod_ready.go:102] pod "coredns-6d4b75cb6d-pk97r" in "kube-system" namespace has status "Ready":"False"
	I0725 16:53:36.847930   30296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.617220122s)
	I0725 16:53:36.872489   30296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 16:53:34.964971   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064168222s)
	I0725 16:53:34.965217   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:34.965226   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:37.509560   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:38.016974   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:38.049541   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.049558   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:38.049618   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:38.080721   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.080733   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:38.080816   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:38.109733   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.109744   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:38.109803   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:38.141301   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.141313   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:38.141400   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:38.172007   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.172020   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:38.172078   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:38.204450   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.204463   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:38.204520   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:38.234269   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.234281   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:38.234336   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:38.263197   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.263210   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:38.263217   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:38.263223   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:36.893973   30296 addons.go:414] enableAddons completed in 2.946347446s
	I0725 16:53:38.857910   30296 pod_ready.go:92] pod "coredns-6d4b75cb6d-pk97r" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:38.857924   30296 pod_ready.go:81] duration metric: took 4.51514414s waiting for pod "coredns-6d4b75cb6d-pk97r" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:38.857932   30296 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-zc96c" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.868381   30296 pod_ready.go:92] pod "coredns-6d4b75cb6d-zc96c" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:39.868398   30296 pod_ready.go:81] duration metric: took 1.010452431s waiting for pod "coredns-6d4b75cb6d-zc96c" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.868406   30296 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.900327   30296 pod_ready.go:92] pod "etcd-no-preload-20220725164719-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:39.900341   30296 pod_ready.go:81] duration metric: took 31.928357ms waiting for pod "etcd-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.900352   30296 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.905064   30296 pod_ready.go:92] pod "kube-apiserver-no-preload-20220725164719-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:39.905074   30296 pod_ready.go:81] duration metric: took 4.716348ms waiting for pod "kube-apiserver-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.905080   30296 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.910729   30296 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220725164719-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:39.910741   30296 pod_ready.go:81] duration metric: took 5.655476ms waiting for pod "kube-controller-manager-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.910748   30296 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r8xpz" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:40.055516   30296 pod_ready.go:92] pod "kube-proxy-r8xpz" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:40.055529   30296 pod_ready.go:81] duration metric: took 144.773719ms waiting for pod "kube-proxy-r8xpz" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:40.055537   30296 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:40.457530   30296 pod_ready.go:92] pod "kube-scheduler-no-preload-20220725164719-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:40.457544   30296 pod_ready.go:81] duration metric: took 401.996265ms waiting for pod "kube-scheduler-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:40.457550   30296 pod_ready.go:38] duration metric: took 6.123930085s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 16:53:40.457571   30296 api_server.go:51] waiting for apiserver process to appear ...
	I0725 16:53:40.457632   30296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:40.470111   30296 api_server.go:71] duration metric: took 6.522458607s to wait for apiserver process to appear ...
	I0725 16:53:40.470126   30296 api_server.go:87] waiting for apiserver healthz status ...
	I0725 16:53:40.470134   30296 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50689/healthz ...
	I0725 16:53:40.476144   30296 api_server.go:266] https://127.0.0.1:50689/healthz returned 200:
	ok
	I0725 16:53:40.477496   30296 api_server.go:140] control plane version: v1.24.3
	I0725 16:53:40.477505   30296 api_server.go:130] duration metric: took 7.374951ms to wait for apiserver health ...
	I0725 16:53:40.477510   30296 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 16:53:40.658214   30296 system_pods.go:59] 9 kube-system pods found
	I0725 16:53:40.658228   30296 system_pods.go:61] "coredns-6d4b75cb6d-pk97r" [5c8dd765-07cc-442c-a097-c898019f7c02] Running
	I0725 16:53:40.658233   30296 system_pods.go:61] "coredns-6d4b75cb6d-zc96c" [d09478f3-429d-4f03-891b-19ac59672799] Running
	I0725 16:53:40.658237   30296 system_pods.go:61] "etcd-no-preload-20220725164719-14919" [888ae756-4b50-408b-9e35-272e796ae5d4] Running
	I0725 16:53:40.658241   30296 system_pods.go:61] "kube-apiserver-no-preload-20220725164719-14919" [f2572bd5-989c-414c-8cdb-f771c052fec7] Running
	I0725 16:53:40.658244   30296 system_pods.go:61] "kube-controller-manager-no-preload-20220725164719-14919" [31b0f2fc-9b4d-416d-b3da-c3d7c2038175] Running
	I0725 16:53:40.658248   30296 system_pods.go:61] "kube-proxy-r8xpz" [9d89a226-d4b6-4543-9b95-c04b32e36bb3] Running
	I0725 16:53:40.658251   30296 system_pods.go:61] "kube-scheduler-no-preload-20220725164719-14919" [b2d6b72d-19b5-463e-9d34-81719d09e606] Running
	I0725 16:53:40.658257   30296 system_pods.go:61] "metrics-server-5c6f97fb75-p6xmp" [e4b5868d-0220-4d63-8b47-1ed865b090cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 16:53:40.658262   30296 system_pods.go:61] "storage-provisioner" [96b01ade-dad8-4551-a42e-ec5920059ae9] Running
	I0725 16:53:40.658266   30296 system_pods.go:74] duration metric: took 180.75083ms to wait for pod list to return data ...
	I0725 16:53:40.658271   30296 default_sa.go:34] waiting for default service account to be created ...
	I0725 16:53:40.855061   30296 default_sa.go:45] found service account: "default"
	I0725 16:53:40.855072   30296 default_sa.go:55] duration metric: took 196.796ms for default service account to be created ...
	I0725 16:53:40.855082   30296 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 16:53:41.059771   30296 system_pods.go:86] 9 kube-system pods found
	I0725 16:53:41.059786   30296 system_pods.go:89] "coredns-6d4b75cb6d-pk97r" [5c8dd765-07cc-442c-a097-c898019f7c02] Running
	I0725 16:53:41.059791   30296 system_pods.go:89] "coredns-6d4b75cb6d-zc96c" [d09478f3-429d-4f03-891b-19ac59672799] Running
	I0725 16:53:41.059795   30296 system_pods.go:89] "etcd-no-preload-20220725164719-14919" [888ae756-4b50-408b-9e35-272e796ae5d4] Running
	I0725 16:53:41.059799   30296 system_pods.go:89] "kube-apiserver-no-preload-20220725164719-14919" [f2572bd5-989c-414c-8cdb-f771c052fec7] Running
	I0725 16:53:41.059807   30296 system_pods.go:89] "kube-controller-manager-no-preload-20220725164719-14919" [31b0f2fc-9b4d-416d-b3da-c3d7c2038175] Running
	I0725 16:53:41.059813   30296 system_pods.go:89] "kube-proxy-r8xpz" [9d89a226-d4b6-4543-9b95-c04b32e36bb3] Running
	I0725 16:53:41.059817   30296 system_pods.go:89] "kube-scheduler-no-preload-20220725164719-14919" [b2d6b72d-19b5-463e-9d34-81719d09e606] Running
	I0725 16:53:41.059823   30296 system_pods.go:89] "metrics-server-5c6f97fb75-p6xmp" [e4b5868d-0220-4d63-8b47-1ed865b090cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 16:53:41.059829   30296 system_pods.go:89] "storage-provisioner" [96b01ade-dad8-4551-a42e-ec5920059ae9] Running
	I0725 16:53:41.059835   30296 system_pods.go:126] duration metric: took 204.745744ms to wait for k8s-apps to be running ...
	I0725 16:53:41.059840   30296 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 16:53:41.059893   30296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:53:41.070559   30296 system_svc.go:56] duration metric: took 10.713694ms WaitForService to wait for kubelet.
	I0725 16:53:41.070573   30296 kubeadm.go:572] duration metric: took 7.122919076s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 16:53:41.070592   30296 node_conditions.go:102] verifying NodePressure condition ...
	I0725 16:53:41.255861   30296 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 16:53:41.255873   30296 node_conditions.go:123] node cpu capacity is 6
	I0725 16:53:41.255884   30296 node_conditions.go:105] duration metric: took 185.287059ms to run NodePressure ...
	I0725 16:53:41.255894   30296 start.go:216] waiting for startup goroutines ...
	I0725 16:53:41.289611   30296 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0725 16:53:41.311552   30296 out.go:177] * Done! kubectl is now configured to use "no-preload-20220725164719-14919" cluster and "default" namespace by default
	I0725 16:53:40.321875   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058620912s)
	I0725 16:53:40.321982   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:40.321997   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:40.368300   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:40.368320   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:40.382186   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:40.382201   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:40.442970   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:40.442981   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:40.442987   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:42.961513   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:43.017747   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:43.047988   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.048000   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:43.048060   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:43.082642   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.082655   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:43.082783   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:43.112812   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.112825   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:43.112882   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:43.142469   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.142480   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:43.142543   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:43.172983   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.172996   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:43.173055   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:43.202378   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.202390   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:43.202456   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:43.232448   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.232462   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:43.232525   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:43.262110   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.262123   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:43.262132   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:43.262140   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:45.319732   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057561012s)
	I0725 16:53:45.319846   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:45.319854   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:45.365923   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:45.365943   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:45.379753   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:45.379771   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:45.457284   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:45.457297   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:45.457305   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:47.975040   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:48.018317   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:48.049476   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.049489   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:48.049548   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:48.078953   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.078965   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:48.079037   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:48.109058   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.109071   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:48.109129   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:48.139159   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.139172   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:48.139228   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:48.169256   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.169267   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:48.169325   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:48.201872   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.201885   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:48.201948   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:48.234103   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.234115   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:48.234178   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:48.266166   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.266179   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:48.266186   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:48.266197   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:48.314601   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:48.318681   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:48.332826   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:48.332841   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:48.388055   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:48.388067   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:48.388075   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:48.402457   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:48.402469   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:50.456667   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054172699s)
	I0725 16:53:52.958273   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:53.018286   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:53.051254   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.051266   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:53.051325   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:53.080846   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.080858   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:53.080914   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:53.109160   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.109183   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:53.109257   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:53.137615   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.137628   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:53.137684   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:53.167697   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.167709   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:53.167765   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:53.198156   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.198169   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:53.198278   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:53.227704   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.227716   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:53.227773   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:53.257307   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.257320   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:53.257327   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:53.257336   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:53.299296   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:53.317934   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:53.330698   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:53.330712   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:53.385054   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:53.385066   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:53.385073   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:53.399132   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:53.399145   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:55.451174   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052002587s)
	I0725 16:53:57.951589   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:58.016855   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:58.049205   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.049216   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:58.049274   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:58.079929   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.079941   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:58.080000   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:58.109713   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.109725   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:58.109785   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:58.138994   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.139008   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:58.139116   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:58.168661   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.168675   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:58.168733   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:58.197795   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.197807   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:58.197867   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:58.226708   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.226719   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:58.226777   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:58.255098   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.255109   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:58.255116   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:58.255123   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:58.295859   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:58.317170   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:58.329926   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:58.329941   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:58.382781   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:58.382793   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:58.382826   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:58.397360   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:58.397372   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:00.450881   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053483262s)
	I0725 16:54:02.951232   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:03.018983   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:03.050556   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.050569   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:03.050627   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:03.079230   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.079242   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:03.079298   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:03.108412   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.108425   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:03.108483   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:03.136613   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.136626   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:03.136688   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:03.165794   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.165805   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:03.165862   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:03.194455   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.194471   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:03.194539   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:03.226412   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.226426   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:03.226490   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:03.261052   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.261064   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:03.261072   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:03.261081   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:05.315384   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054277623s)
	I0725 16:54:05.315492   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:05.315500   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:05.354732   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:05.354744   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:05.366506   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:05.366519   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:05.419168   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:05.419178   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:05.419185   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:07.935013   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:08.017181   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:08.048536   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.048557   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:08.048619   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:08.080579   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.080592   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:08.080652   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:08.108274   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.108287   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:08.108346   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:08.138319   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.138331   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:08.138390   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:08.168384   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.168395   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:08.168452   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:08.198022   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.198034   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:08.198092   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:08.226920   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.226933   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:08.226991   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:08.257052   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.257063   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:08.257070   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:08.257078   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:08.268657   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:08.268690   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:08.320782   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:08.320793   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:08.320799   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:08.334711   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:08.334722   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:10.390667   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05591657s)
	I0725 16:54:10.390776   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:10.390784   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:12.930154   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:13.016938   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:13.046701   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.046713   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:13.046769   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:13.076212   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.076225   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:13.076282   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:13.106089   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.106099   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:13.106147   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:13.136688   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.136702   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:13.136762   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:13.166341   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.166353   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:13.166412   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:13.194833   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.194844   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:13.194910   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:13.223450   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.223462   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:13.223522   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:13.253571   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.253583   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:13.253590   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:13.253596   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:13.296069   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:13.296080   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:13.308497   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:13.317701   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:13.373112   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:13.373126   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:13.373135   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:13.387086   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:13.387099   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:15.443702   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056574496s)
	I0725 16:54:17.946094   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:18.019154   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:18.050260   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.050273   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:18.050335   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:18.079777   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.079789   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:18.079847   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:18.111380   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.111393   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:18.111445   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:18.143959   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.143969   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:18.144021   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:18.180312   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.180332   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:18.180399   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:18.215895   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.215911   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:18.215963   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:18.252789   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.252802   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:18.252852   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:18.290782   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.290810   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:18.290818   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:18.290847   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:18.303512   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:18.317352   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:18.376087   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:18.376098   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:18.376106   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:18.390833   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:18.390853   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:20.449118   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05823903s)
	I0725 16:54:20.449231   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:20.449238   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:22.992397   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:23.017255   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:23.045826   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.045844   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:23.045915   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:23.075162   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.075174   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:23.075229   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:23.105247   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.105260   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:23.105315   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:23.134037   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.134056   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:23.134113   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:23.163197   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.163211   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:23.163269   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:23.192645   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.192657   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:23.192714   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:23.220793   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.220804   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:23.220863   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:23.250836   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.250847   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:23.250854   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:23.250860   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:25.307612   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056726692s)
	I0725 16:54:25.307719   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:25.307726   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:25.346156   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:25.346168   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:25.358492   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:25.358504   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:25.410340   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:25.410351   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:25.410358   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:27.924097   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:28.017834   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:28.049566   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.049580   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:28.049646   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:28.079671   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.079685   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:28.079744   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:28.108629   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.108641   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:28.108696   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:28.137881   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.137893   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:28.137954   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:28.166821   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.166834   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:28.166898   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:28.196515   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.196527   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:28.196590   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:28.225959   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.225971   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:28.226028   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:28.254555   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.254567   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:28.254574   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:28.254581   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:30.308050   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053443356s)
	I0725 16:54:30.308156   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:30.308162   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:30.347803   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:30.347816   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:30.360116   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:30.360128   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:30.413675   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:30.413687   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:30.413693   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:32.929655   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:33.019242   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:33.052472   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.052485   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:33.052542   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:33.081513   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.081531   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:33.081586   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:33.112328   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.112340   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:33.112399   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:33.140741   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.140755   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:33.140820   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:33.171364   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.171382   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:33.171441   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:33.203103   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.203116   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:33.203176   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:33.233444   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.233456   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:33.233522   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:33.265044   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.265056   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:33.265063   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:33.265071   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
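	[Annotation] The cycles above are minikube's log collector polling the node over SSH for control-plane containers and retrying "kubectl describe nodes"; the recurring "connection to the server localhost:8443 was refused" error means no apiserver was listening yet. A minimal sketch reproducing the same probe from a shell on the node (the k8s_ name prefix and port 8443 are taken from the commands and errors logged above, not from any minikube API):
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kubernetes-dashboard storage-provisioner kube-controller-manager; do
	  # Kubelet-managed containers are named k8s_<component>_..., hence the filter.
	  ids=$(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}')
	  echo "${name}: ${ids:-none}"
	done
	# Is anything listening on the apiserver port (8443, per the error above)?
	sudo ss -ltnp | grep ':8443' || echo 'no listener on :8443'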
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 23:48:41 UTC, end at Mon 2022-07-25 23:54:37 UTC. --
	Jul 25 23:53:10 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:10.563916290Z" level=info msg="ignoring event" container=4fba798b711ddd64d9ff9bfe7cb49f85d35aed1f399f6941771c57d4cb9a3622 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:10 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:10.636051969Z" level=info msg="ignoring event" container=bdd71eddbd7c3b40b3ffd0a56132620bc15eae61a0b9e0ad9c5158cef7742169 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:10 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:10.742509645Z" level=info msg="ignoring event" container=4e4aaff7a7fd5dd0666f44054dc5f07416e46d159accf0449bf1cf1bc5ac08d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:10 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:10.817768714Z" level=info msg="ignoring event" container=d15b92e161c4e65b14db68e4c27b61a5b1d4520da032a11258dee4bc8420944f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:10 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:10.884080291Z" level=info msg="ignoring event" container=829f65d4842bc79a48d1135be17c2992534537de97f9d73f8c9ce30adbfe4a28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:10 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:10.954167472Z" level=info msg="ignoring event" container=51b13151cda09a31c7f07b36e7af955cfdfa4c09f8c4870eab94f1bf12a5b18f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:11 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:11.038794525Z" level=info msg="ignoring event" container=98fe17bcba956ef7e47218b4d5bc668dc58a4e2af4c9d8dae663da40b1b26ffb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:11 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:11.110105825Z" level=info msg="ignoring event" container=92548eb878584890ad3b6d104da9daabf2c90ffa04d529d7e77b4fa21e5a9253 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:11 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:11.228089209Z" level=info msg="ignoring event" container=e51c868f6ca7aeb1a5b57e8fee62c6dbccc46b777a3d659cea5fc5048c45fb66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:11 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:11.297820545Z" level=info msg="ignoring event" container=8db9fb247dba9757e04a108b4b71f18f864c4d144c4607357d0863fe12df5b21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:11 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:11.377998687Z" level=info msg="ignoring event" container=66e62855ca93d0c2525f04999ef9cf81f26612ee5586ed17983cd5764ed17f02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:36 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:36.571409006Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:53:36 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:36.571431939Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:53:36 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:36.573611331Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:53:38 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:38.181780075Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 23:53:38 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:38.494867374Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 23:53:42 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:42.117737604Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 25 23:53:42 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:42.177160806Z" level=info msg="ignoring event" container=b3338e80827364cee061681880366d77453088d80ea5d4d0649216c6dfa4abab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:42 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:42.620389575Z" level=info msg="ignoring event" container=d78e67bf19f151d984686a3944cf7b8f4e07f1ed8150c85d7e72538359ff65f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:42 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:42.720606079Z" level=info msg="ignoring event" container=112d188f94f9b91bd7b740e94cabea00e1f9b8f861ab97c0485ce66f2d0d2222 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:42 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:42.785404270Z" level=info msg="ignoring event" container=3e8a6b8ba991b939651b7a9f06182d7460cd18d48d1d58527096504562dd58b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:49 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:49.640069131Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:53:49 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:49.640520790Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:53:49 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:49.641733473Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:54:00 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:54:00.763004778Z" level=info msg="ignoring event" container=8fb3d6ea2c67df0206acc1e6d0beac72517f3ba4765f4954b0a044d792ebd6fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
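	[Annotation] The "fake.domain" pull failures above are plain DNS resolution errors: the daemon's resolver at 192.168.65.2 has no record for that host, so every pull attempt fails before reaching any registry. A quick confirmation from inside the node, as a sketch (hostname and resolver address are copied from the log lines above):
	nslookup fake.domain 192.168.65.2 || echo 'fake.domain does not resolve, matching the daemon errors'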
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	8fb3d6ea2c67d       a90209bb39e3d                                                                                    37 seconds ago       Exited              dashboard-metrics-scraper   2                   e3479f6304fca
	1187dc7cc8b13       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   51 seconds ago       Running             kubernetes-dashboard        0                   b04cd97d9099a
	93d6948c9c496       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   f7a9ea66aab0b
	f7dcf25a62514       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   70044590c4c53
	b6606566197a3       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   e79f825231149
	fb877ae4dac25       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   ca3dbd6e8c99d
	a6a7fdc4f7300       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   3a2bc012ba6a7
	8ac3f526bf4fd       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   58bd6152d6137
	7667f4a88453a       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   b9af40698474e
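	[Annotation] This table is the output of the "crictl ... || docker ps -a" fallback seen throughout the collection loop: crictl is preferred when installed, with the Docker CLI as the fallback. An equivalent form of that one-liner using modern command substitution (a behavior-identical sketch of the backticked command above):
	sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a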
	
	* 
	* ==> coredns [f7dcf25a6251] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220725164719-14919
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220725164719-14919
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b
	                    minikube.k8s.io/name=no-preload-20220725164719-14919
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T16_53_20_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 23:53:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220725164719-14919
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 23:54:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 23:54:35 +0000   Mon, 25 Jul 2022 23:53:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 23:54:35 +0000   Mon, 25 Jul 2022 23:53:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 23:54:35 +0000   Mon, 25 Jul 2022 23:53:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 23:54:35 +0000   Mon, 25 Jul 2022 23:53:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-20220725164719-14919
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                82882aca-6043-459a-8f9a-a031699e1ba4
	  Boot ID:                    95c3cee9-5325-46b1-8645-b2afb4791ab2
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-pk97r                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     65s
	  kube-system                 etcd-no-preload-20220725164719-14919                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         78s
	  kube-system                 kube-apiserver-no-preload-20220725164719-14919             250m (4%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-no-preload-20220725164719-14919    200m (3%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-r8xpz                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-no-preload-20220725164719-14919             100m (1%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 metrics-server-5c6f97fb75-p6xmp                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         63s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-7dmwv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-9c5cf                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
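	[Annotation] These totals cross-check against the pod table and the node Capacity above (6 CPUs = 6000m; 6086504Ki ≈ 5944Mi):
	  cpu requests:    100m + 100m + 250m + 200m + 100m + 100m = 850m;  850 / 6000 ≈ 14%
	  memory requests: 70Mi + 100Mi + 200Mi = 370Mi;                    370 / 5944 ≈ 6%
	  memory limits:   170Mi (coredns only);                            170 / 5944 ≈ 2%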
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 63s                kube-proxy       
	  Normal  NodeHasSufficientMemory  84s (x5 over 85s)  kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x4 over 85s)  kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x4 over 85s)  kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientPID
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s                kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientPID
	  Normal  NodeReady                78s                kubelet          Node no-preload-20220725164719-14919 status is now: NodeReady
	  Normal  RegisteredNode           66s                node-controller  Node no-preload-20220725164719-14919 event: Registered Node no-preload-20220725164719-14919 in Controller
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3s                 kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [8ac3f526bf4f] <==
	* {"level":"info","ts":"2022-07-25T23:53:14.825Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-25T23:53:14.829Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T23:53:14.829Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T23:53:14.829Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T23:53:14.829Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T23:53:14.829Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T23:53:15.027Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-20220725164719-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T23:53:15.027Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:53:15.027Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:53:15.027Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:53:15.028Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:53:15.028Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:53:15.028Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:53:15.028Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T23:53:15.029Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T23:53:15.029Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T23:53:15.029Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T23:53:20.260Z","caller":"traceutil/trace.go:171","msg":"trace[1187369827] transaction","detail":"{read_only:false; response_revision:229; number_of_response:1; }","duration":"107.802684ms","start":"2022-07-25T23:53:20.152Z","end":"2022-07-25T23:53:20.260Z","steps":["trace[1187369827] 'process raft request'  (duration: 33.093498ms)","trace[1187369827] 'compare'  (duration: 74.367545ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  23:54:38 up  1:01,  0 users,  load average: 0.62, 1.05, 1.25
	Linux no-preload-20220725164719-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [7667f4a88453] <==
	* I0725 23:53:19.876472       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 23:53:20.416403       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 23:53:20.421540       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0725 23:53:20.429665       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 23:53:20.503648       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 23:53:33.532114       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0725 23:53:33.582788       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 23:53:34.372721       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 23:53:35.544816       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.105.20.57]
	W0725 23:53:36.424216       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 23:53:36.424293       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 23:53:36.424307       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 23:53:36.424330       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 23:53:36.424540       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 23:53:36.425542       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 23:53:36.842060       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.107.143.128]
	I0725 23:53:36.860677       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.165.83]
	W0725 23:54:36.382644       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 23:54:36.382688       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 23:54:36.382694       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 23:54:36.383731       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 23:54:36.383941       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 23:54:36.383985       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
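	[Annotation] The OpenAPI failures above come from the aggregated "v1beta1.metrics.k8s.io" APIService having no healthy backend at the time, so the apiserver keeps requeueing the spec fetch. One way to inspect that registration while debugging, as a sketch:
	kubectl get apiservice v1beta1.metrics.k8s.io -o wide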
	
	* 
	* ==> kube-controller-manager [a6a7fdc4f730] <==
	* I0725 23:53:33.734019       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zc96c"
	I0725 23:53:33.739512       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-pk97r"
	I0725 23:53:33.760180       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-zc96c"
	I0725 23:53:35.360047       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0725 23:53:35.363914       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0725 23:53:35.426011       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0725 23:53:35.431240       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-p6xmp"
	I0725 23:53:36.695737       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0725 23:53:36.701666       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 23:53:36.703776       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0725 23:53:36.725053       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 23:53:36.725464       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 23:53:36.730312       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0725 23:53:36.730694       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 23:53:36.730807       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 23:53:36.736159       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 23:53:36.736485       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 23:53:36.739209       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 23:53:36.739445       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 23:53:36.745158       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 23:53:36.745192       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 23:53:36.752244       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-9c5cf"
	I0725 23:53:36.828478       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-7dmwv"
	E0725 23:54:35.106677       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0725 23:54:35.123952       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [b6606566197a] <==
	* I0725 23:53:34.270515       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 23:53:34.270601       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 23:53:34.270683       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 23:53:34.366134       1 server_others.go:206] "Using iptables Proxier"
	I0725 23:53:34.366248       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 23:53:34.366297       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 23:53:34.366313       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 23:53:34.366343       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 23:53:34.366811       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 23:53:34.367219       1 server.go:661] "Version info" version="v1.24.3"
	I0725 23:53:34.367279       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 23:53:34.368929       1 config.go:317] "Starting service config controller"
	I0725 23:53:34.368980       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 23:53:34.369003       1 config.go:226] "Starting endpoint slice config controller"
	I0725 23:53:34.369008       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 23:53:34.370073       1 config.go:444] "Starting node config controller"
	I0725 23:53:34.370110       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 23:53:34.470686       1 shared_informer.go:262] Caches are synced for node config
	I0725 23:53:34.470745       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 23:53:34.470754       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [fb877ae4dac2] <==
	* W0725 23:53:17.824747       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 23:53:17.824778       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 23:53:17.824671       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 23:53:17.824914       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0725 23:53:17.825194       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 23:53:17.825225       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 23:53:17.825324       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 23:53:17.825360       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 23:53:17.825531       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 23:53:17.825584       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 23:53:17.825624       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 23:53:17.825794       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 23:53:17.828310       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 23:53:17.828347       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 23:53:17.831705       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 23:53:17.831793       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 23:53:18.711755       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 23:53:18.711796       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 23:53:18.788102       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 23:53:18.788142       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0725 23:53:18.823433       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 23:53:18.823473       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 23:53:18.827332       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 23:53:18.827369       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0725 23:53:19.179220       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 23:48:41 UTC, end at Mon 2022-07-25 23:54:39 UTC. --
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.514271    9913 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.514315    9913 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.514343    9913 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.514368    9913 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.514408    9913 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538694    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjd7c\" (UniqueName: \"kubernetes.io/projected/fb1410cc-4f8d-414e-abf8-64f2efff1852-kube-api-access-mjd7c\") pod \"kubernetes-dashboard-5fd5574d9f-9c5cf\" (UID: \"fb1410cc-4f8d-414e-abf8-64f2efff1852\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-9c5cf"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538754    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d89a226-d4b6-4543-9b95-c04b32e36bb3-kube-proxy\") pod \"kube-proxy-r8xpz\" (UID: \"9d89a226-d4b6-4543-9b95-c04b32e36bb3\") " pod="kube-system/kube-proxy-r8xpz"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538773    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d89a226-d4b6-4543-9b95-c04b32e36bb3-xtables-lock\") pod \"kube-proxy-r8xpz\" (UID: \"9d89a226-d4b6-4543-9b95-c04b32e36bb3\") " pod="kube-system/kube-proxy-r8xpz"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538794    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e4b5868d-0220-4d63-8b47-1ed865b090cc-tmp-dir\") pod \"metrics-server-5c6f97fb75-p6xmp\" (UID: \"e4b5868d-0220-4d63-8b47-1ed865b090cc\") " pod="kube-system/metrics-server-5c6f97fb75-p6xmp"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538810    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c8dd765-07cc-442c-a097-c898019f7c02-config-volume\") pod \"coredns-6d4b75cb6d-pk97r\" (UID: \"5c8dd765-07cc-442c-a097-c898019f7c02\") " pod="kube-system/coredns-6d4b75cb6d-pk97r"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538830    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7npkz\" (UniqueName: \"kubernetes.io/projected/acdd6709-c55c-4389-9025-5a4541349682-kube-api-access-7npkz\") pod \"dashboard-metrics-scraper-dffd48c4c-7dmwv\" (UID: \"acdd6709-c55c-4389-9025-5a4541349682\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-7dmwv"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538847    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gqpr\" (UniqueName: \"kubernetes.io/projected/9d89a226-d4b6-4543-9b95-c04b32e36bb3-kube-api-access-4gqpr\") pod \"kube-proxy-r8xpz\" (UID: \"9d89a226-d4b6-4543-9b95-c04b32e36bb3\") " pod="kube-system/kube-proxy-r8xpz"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538909    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/acdd6709-c55c-4389-9025-5a4541349682-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-7dmwv\" (UID: \"acdd6709-c55c-4389-9025-5a4541349682\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-7dmwv"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538939    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d89a226-d4b6-4543-9b95-c04b32e36bb3-lib-modules\") pod \"kube-proxy-r8xpz\" (UID: \"9d89a226-d4b6-4543-9b95-c04b32e36bb3\") " pod="kube-system/kube-proxy-r8xpz"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538967    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbs84\" (UniqueName: \"kubernetes.io/projected/96b01ade-dad8-4551-a42e-ec5920059ae9-kube-api-access-zbs84\") pod \"storage-provisioner\" (UID: \"96b01ade-dad8-4551-a42e-ec5920059ae9\") " pod="kube-system/storage-provisioner"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.539009    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f622\" (UniqueName: \"kubernetes.io/projected/e4b5868d-0220-4d63-8b47-1ed865b090cc-kube-api-access-6f622\") pod \"metrics-server-5c6f97fb75-p6xmp\" (UID: \"e4b5868d-0220-4d63-8b47-1ed865b090cc\") " pod="kube-system/metrics-server-5c6f97fb75-p6xmp"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.539029    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc7rw\" (UniqueName: \"kubernetes.io/projected/5c8dd765-07cc-442c-a097-c898019f7c02-kube-api-access-nc7rw\") pod \"coredns-6d4b75cb6d-pk97r\" (UID: \"5c8dd765-07cc-442c-a097-c898019f7c02\") " pod="kube-system/coredns-6d4b75cb6d-pk97r"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.539063    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/96b01ade-dad8-4551-a42e-ec5920059ae9-tmp\") pod \"storage-provisioner\" (UID: \"96b01ade-dad8-4551-a42e-ec5920059ae9\") " pod="kube-system/storage-provisioner"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.539166    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fb1410cc-4f8d-414e-abf8-64f2efff1852-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-9c5cf\" (UID: \"fb1410cc-4f8d-414e-abf8-64f2efff1852\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-9c5cf"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.539251    9913 reconciler.go:157] "Reconciler: start to sync state"
	Jul 25 23:54:37 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:37.690683    9913 request.go:601] Waited for 1.088878238s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 25 23:54:37 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:37.719258    9913 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220725164719-14919\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220725164719-14919"
	Jul 25 23:54:37 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:37.950548    9913 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220725164719-14919\" already exists" pod="kube-system/kube-apiserver-no-preload-20220725164719-14919"
	Jul 25 23:54:38 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:38.094978    9913 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220725164719-14919\" already exists" pod="kube-system/etcd-no-preload-20220725164719-14919"
	Jul 25 23:54:38 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:38.356192    9913 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220725164719-14919\" already exists" pod="kube-system/kube-scheduler-no-preload-20220725164719-14919"
	
	* 
	* ==> kubernetes-dashboard [1187dc7cc8b1] <==
	* 2022/07/25 23:53:47 Using namespace: kubernetes-dashboard
	2022/07/25 23:53:47 Using in-cluster config to connect to apiserver
	2022/07/25 23:53:47 Using secret token for csrf signing
	2022/07/25 23:53:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/25 23:53:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/25 23:53:47 Successful initial request to the apiserver, version: v1.24.3
	2022/07/25 23:53:47 Generating JWE encryption key
	2022/07/25 23:53:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/25 23:53:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/25 23:53:47 Initializing JWE encryption key from synchronized object
	2022/07/25 23:53:47 Creating in-cluster Sidecar client
	2022/07/25 23:53:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 23:53:47 Serving insecurely on HTTP port: 9090
	2022/07/25 23:54:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 23:53:47 Starting overwatch
	
	* 
	* ==> storage-provisioner [93d6948c9c49] <==
	* I0725 23:53:36.423963       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 23:53:36.448276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 23:53:36.448305       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 23:53:36.462121       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 23:53:36.463029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7071249-ea7e-4a53-9ca0-ca8a680bc065", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220725164719-14919_0d088afd-fb98-473e-9519-720be122e2d4 became leader
	I0725 23:53:36.463192       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220725164719-14919_0d088afd-fb98-473e-9519-720be122e2d4!
	I0725 23:53:36.564081       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220725164719-14919_0d088afd-fb98-473e-9519-720be122e2d4!
	

-- /stdout --
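Note on the logs above: the burst of FailedCreate events from kube-controller-manager is the usual addon bring-up race; ReplicaSet sync for the dashboard pods keeps retrying until the kubernetes-dashboard ServiceAccount exists, and SuccessfulCreate follows within milliseconds. The later resource_quota_controller and garbagecollector warnings both point at the metrics.k8s.io/v1beta1 APIService having no healthy backend, which matches the non-running metrics-server pod reported below. As a minimal client-go sketch (not part of the test suite), this is the kind of wait that avoids asserting on the dashboard pods before the ServiceAccount exists; the namespace and name are taken from the log, the kubeconfig path is purely illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path; the real run keeps its own under the jenkins root.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Retry until the ServiceAccount the ReplicaSet controller complained about exists.
		err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("kubernetes-dashboard").
				Get(context.TODO(), "kubernetes-dashboard", metav1.GetOptions{})
			return err == nil, nil // not found yet: keep polling
		})
		if err != nil {
			panic(err) // timed out
		}
		fmt.Println("serviceaccount kubernetes-dashboard exists")
	}

The one-shot equivalent is kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard.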
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220725164719-14919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-p6xmp
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220725164719-14919 describe pod metrics-server-5c6f97fb75-p6xmp
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220725164719-14919 describe pod metrics-server-5c6f97fb75-p6xmp: exit status 1 (288.416022ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-p6xmp" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220725164719-14919 describe pod metrics-server-5c6f97fb75-p6xmp: exit status 1
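For context on the two commands above: the helper first lists pods with the field selector status.phase!=Running, which matched metrics-server-5c6f97fb75-p6xmp (never Running in this run, consistent with the metrics-server addon having been enabled with --registries=MetricsServer=fake.domain, see the Audit table below, so its image cannot be pulled), and then describes each match. The describe returns NotFound, so the pod was evidently deleted or recreated between the list and the describe. A rough client-go equivalent of the list step, reusing the clientset setup and imports from the earlier sketch:

	// listNonRunning mirrors the helper's query:
	//   kubectl get po -A --field-selector=status.phase!=Running
	// cs is a *kubernetes.Clientset built as in the previous sketch.
	func listNonRunning(ctx context.Context, cs *kubernetes.Clientset) error {
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx,
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
		return nil
	}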
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220725164719-14919
helpers_test.go:235: (dbg) docker inspect no-preload-20220725164719-14919:

-- stdout --
	[
	    {
	        "Id": "ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2",
	        "Created": "2022-07-25T23:47:21.494738173Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235566,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:48:41.371128777Z",
	            "FinishedAt": "2022-07-25T23:48:39.424396366Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2/hosts",
	        "LogPath": "/var/lib/docker/containers/ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2/ddc44e208687322b1292a12463caf9695d8555c685f97d220083b3d6b55319b2-json.log",
	        "Name": "/no-preload-20220725164719-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220725164719-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220725164719-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5579c431d7f5e88eed0b2c9884c4b6e7591fa8d54b9274b5fe9a8404a4863192-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c700ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/docker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a700e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5579c431d7f5e88eed0b2c9884c4b6e7591fa8d54b9274b5fe9a8404a4863192/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5579c431d7f5e88eed0b2c9884c4b6e7591fa8d54b9274b5fe9a8404a4863192/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5579c431d7f5e88eed0b2c9884c4b6e7591fa8d54b9274b5fe9a8404a4863192/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220725164719-14919",
	                "Source": "/var/lib/docker/volumes/no-preload-20220725164719-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220725164719-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220725164719-14919",
	                "name.minikube.sigs.k8s.io": "no-preload-20220725164719-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4905c443ddaead38549ceeb1061d8ecf605772579655f9127b0e1ba8b821ba9b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50685"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50686"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50687"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50688"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50689"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4905c443ddae",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220725164719-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ddc44e208687",
	                        "no-preload-20220725164719-14919"
	                    ],
	                    "NetworkID": "782d8a0b933ddac573007847cec70a531eee56f5c5e0713703bef5697069ae1d",
	                    "EndpointID": "f02c7e5ed58b7f718bc5210901e3a8c34b46ddb34a98d28c68bc204396a05cad",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
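Several fields in this inspect output line up with the rest of the report: State shows the kic container running since 2022-07-25T23:48:41Z, matching the "Logs begin at" timestamp in the kubelet section above; HostConfig.Memory is 2306867200 bytes, exactly 2200 MiB (2200 * 1024 * 1024) from the --memory=2200 flag visible in the Audit table below; and the apiserver port 8443/tcp is published on host port 50689. A short sketch of reading those same fields with the Docker Engine Go SDK; the container name comes from this report, everything else is illustrative:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		insp, err := cli.ContainerInspect(context.Background(), "no-preload-20220725164719-14919")
		if err != nil {
			panic(err)
		}
		fmt.Println("status:", insp.State.Status)            // "running" in the dump above
		fmt.Println("memory bytes:", insp.HostConfig.Memory) // 2306867200 = 2200 MiB
		for _, b := range insp.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("8443/tcp -> %s:%s\n", b.HostIP, b.HostPort) // 0.0.0.0:50689 above
		}
	}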
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220725164719-14919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220725164719-14919 logs -n 25: (2.840629637s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p calico-20220725163046-14919                    | calico-20220725163046-14919             | jenkins | v1.26.0 | 25 Jul 22 16:44 PDT | 25 Jul 22 16:44 PDT |
	| start   | -p bridge-20220725163045-14919                    | bridge-20220725163045-14919             | jenkins | v1.26.0 | 25 Jul 22 16:44 PDT | 25 Jul 22 16:44 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p false-20220725163046-14919                     | false-20220725163046-14919              | jenkins | v1.26.0 | 25 Jul 22 16:44 PDT | 25 Jul 22 16:44 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| ssh     | -p bridge-20220725163045-14919                    | bridge-20220725163045-14919             | jenkins | v1.26.0 | 25 Jul 22 16:44 PDT | 25 Jul 22 16:44 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p false-20220725163046-14919                     | false-20220725163046-14919              | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	| delete  | -p bridge-20220725163045-14919                    | bridge-20220725163045-14919             | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	| start   | -p                                                | enable-default-cni-20220725163045-14919 | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	|         | enable-default-cni-20220725163045-14919           |                                         |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| start   | -p                                                | kubenet-20220725163045-14919            | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	|         | kubenet-20220725163045-14919                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220725163045-14919 | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	|         | enable-default-cni-20220725163045-14919           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220725163045-14919            | jenkins | v1.26.0 | 25 Jul 22 16:45 PDT | 25 Jul 22 16:45 PDT |
	|         | kubenet-20220725163045-14919                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220725163045-14919 | jenkins | v1.26.0 | 25 Jul 22 16:46 PDT | 25 Jul 22 16:46 PDT |
	|         | enable-default-cni-20220725163045-14919           |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:46 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220725163045-14919            | jenkins | v1.26.0 | 25 Jul 22 16:47 PDT | 25 Jul 22 16:47 PDT |
	|         | kubenet-20220725163045-14919                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:47 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:53 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:50 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT | 25 Jul 22 16:51 PDT |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT | 25 Jul 22 16:51 PDT |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 16:51:53
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 16:51:53.294201   30645 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:51:53.294366   30645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:51:53.294371   30645 out.go:309] Setting ErrFile to fd 2...
	I0725 16:51:53.294375   30645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:51:53.294471   30645 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:51:53.294941   30645 out.go:303] Setting JSON to false
	I0725 16:51:53.309887   30645 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10036,"bootTime":1658783077,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:51:53.309984   30645 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:51:53.331402   30645 out.go:177] * [old-k8s-version-20220725164610-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:51:53.373600   30645 notify.go:193] Checking for updates...
	I0725 16:51:53.395513   30645 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:51:53.417111   30645 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:51:53.438407   30645 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:51:53.459736   30645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:51:53.481553   30645 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:51:53.504223   30645 config.go:178] Loaded profile config "old-k8s-version-20220725164610-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 16:51:53.526315   30645 out.go:177] * Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	I0725 16:51:53.547450   30645 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:51:53.618847   30645 docker.go:137] docker version: linux-20.10.17
	I0725 16:51:53.618995   30645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:51:53.753067   30645 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:51:53.688740284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
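
	(The snapshot above comes from the `docker system info --format "{{json .}}"` run logged just before it.) A hedged Go sketch of the same probe, decoding only a few of the fields visible in the dump; the struct is illustrative and is not minikube's actual info.go type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo covers a handful of the JSON keys Docker emits for `system info`.
	type dockerInfo struct {
		ServerVersion   string `json:"ServerVersion"`
		OperatingSystem string `json:"OperatingSystem"`
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s on %s, %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
	}
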
	I0725 16:51:53.796714   30645 out.go:177] * Using the docker driver based on existing profile
	I0725 16:51:53.817466   30645 start.go:284] selected driver: docker
	I0725 16:51:53.817494   30645 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:51:53.817613   30645 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:51:53.820630   30645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:51:53.953927   30645 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:51:53.891132742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:51:53.954103   30645 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 16:51:53.954124   30645 cni.go:95] Creating CNI manager for ""
	I0725 16:51:53.954135   30645 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:51:53.954143   30645 start_flags.go:310] config:
	{Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:51:53.997664   30645 out.go:177] * Starting control plane node old-k8s-version-20220725164610-14919 in cluster old-k8s-version-20220725164610-14919
	I0725 16:51:54.018754   30645 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:51:54.039707   30645 out.go:177] * Pulling base image ...
	I0725 16:51:54.082764   30645 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:51:54.082795   30645 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:51:54.082852   30645 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 16:51:54.082881   30645 cache.go:57] Caching tarball of preloaded images
	I0725 16:51:54.083082   30645 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 16:51:54.083106   30645 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0725 16:51:54.084260   30645 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/config.json ...
	I0725 16:51:54.147078   30645 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:51:54.147095   30645 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:51:54.147107   30645 cache.go:208] Successfully downloaded all kic artifacts
	I0725 16:51:54.147181   30645 start.go:370] acquiring machines lock for old-k8s-version-20220725164610-14919: {Name:mk039986a3467f394c941873ee88acd0fb616596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:51:54.147261   30645 start.go:374] acquired machines lock for "old-k8s-version-20220725164610-14919" in 61.057µs
	I0725 16:51:54.147278   30645 start.go:95] Skipping create...Using existing machine configuration
	I0725 16:51:54.147288   30645 fix.go:55] fixHost starting: 
	I0725 16:51:54.147527   30645 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725164610-14919 --format={{.State.Status}}
	I0725 16:51:54.215341   30645 fix.go:103] recreateIfNeeded on old-k8s-version-20220725164610-14919: state=Stopped err=<nil>
	W0725 16:51:54.215374   30645 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 16:51:54.259242   30645 out.go:177] * Restarting existing docker container for "old-k8s-version-20220725164610-14919" ...
	I0725 16:51:50.322882   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:51:52.874717   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:51:54.284887   30645 cli_runner.go:164] Run: docker start old-k8s-version-20220725164610-14919
	I0725 16:51:54.645993   30645 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725164610-14919 --format={{.State.Status}}
	I0725 16:51:54.722808   30645 kic.go:415] container "old-k8s-version-20220725164610-14919" state is running.
	I0725 16:51:54.723439   30645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:51:54.808300   30645 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/config.json ...
	I0725 16:51:54.808762   30645 machine.go:88] provisioning docker machine ...
	I0725 16:51:54.808790   30645 ubuntu.go:169] provisioning hostname "old-k8s-version-20220725164610-14919"
	I0725 16:51:54.808863   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:54.891385   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:54.891620   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:54.891634   30645 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220725164610-14919 && echo "old-k8s-version-20220725164610-14919" | sudo tee /etc/hostname
	I0725 16:51:55.024662   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220725164610-14919
	
	I0725 16:51:55.024757   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.103341   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:55.103525   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:55.103544   30645 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220725164610-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220725164610-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220725164610-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 16:51:55.230047   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: 
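
	(Each "About to run SSH command" / "SSH cmd err, output" pair above is libmachine executing a shell snippet over the forwarded SSH port.) A rough Go sketch of one such round trip with golang.org/x/crypto/ssh; the address 127.0.0.1:50823, user "docker", and key location are copied from the log, everything else is assumption:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path as logged by sshutil.go above; adjust for your own machine dir.
		key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:50823", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("out=%s err=%v\n", out, err)
	}
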
	I0725 16:51:55.230076   30645 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:51:55.230107   30645 ubuntu.go:177] setting up certificates
	I0725 16:51:55.230119   30645 provision.go:83] configureAuth start
	I0725 16:51:55.230190   30645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:51:55.301676   30645 provision.go:138] copyHostCerts
	I0725 16:51:55.301768   30645 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:51:55.301778   30645 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:51:55.301894   30645 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:51:55.302095   30645 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:51:55.302104   30645 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:51:55.302175   30645 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:51:55.302315   30645 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:51:55.302321   30645 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:51:55.302379   30645 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:51:55.302507   30645 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220725164610-14919 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220725164610-14919]
	I0725 16:51:55.405165   30645 provision.go:172] copyRemoteCerts
	I0725 16:51:55.405225   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:51:55.405293   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.477166   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:55.565264   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:51:55.582096   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0725 16:51:55.599314   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 16:51:55.616047   30645 provision.go:86] duration metric: configureAuth took 385.912561ms
	I0725 16:51:55.616059   30645 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:51:55.616211   30645 config.go:178] Loaded profile config "old-k8s-version-20220725164610-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 16:51:55.616261   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.687491   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:55.687629   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:55.687638   30645 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:51:55.809152   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:51:55.809170   30645 ubuntu.go:71] root file system type: overlay
	I0725 16:51:55.809333   30645 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:51:55.809407   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:55.886743   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:55.886909   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:55.886957   30645 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:51:56.015134   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:51:56.015230   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.087087   30645 main.go:134] libmachine: Using SSH client type: native
	I0725 16:51:56.087253   30645 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 50823 <nil> <nil>}
	I0725 16:51:56.087280   30645 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 16:51:56.212027   30645 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:51:56.212044   30645 machine.go:91] provisioned docker machine in 1.403264453s
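
	(Note how the docker.service update just above is made idempotent: the unit is written to docker.service.new, diffed against the live file, and daemon-reload/enable/restart only run when the diff is non-empty.) A hedged Go sketch of that write-compare-swap shape for an arbitrary config file; the paths and the restart step are placeholders, not minikube code:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateIfChanged writes content to path only when it differs from what is
	// already there, and reports whether a change (and hence a restart) happened.
	func updateIfChanged(path string, content []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil // same bytes: skip the restart, as the diff above does
		}
		if err := os.WriteFile(path+".new", content, 0o644); err != nil {
			return false, err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=example\n")
		changed, err := updateIfChanged("/tmp/example.service", unit)
		if err != nil {
			panic(err)
		}
		if changed {
			// Placeholder for the daemon-reload/enable/restart chain in the log.
			fmt.Println(exec.Command("true").Run())
		}
	}
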
	I0725 16:51:56.212055   30645 start.go:307] post-start starting for "old-k8s-version-20220725164610-14919" (driver="docker")
	I0725 16:51:56.212061   30645 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:51:56.212133   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:51:56.212177   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.283031   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:56.372939   30645 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:51:56.376433   30645 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:51:56.376447   30645 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:51:56.376454   30645 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:51:56.376458   30645 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 16:51:56.376467   30645 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:51:56.376572   30645 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:51:56.376727   30645 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:51:56.376875   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:51:56.383744   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:51:56.400937   30645 start.go:310] post-start completed in 188.872215ms
	I0725 16:51:56.401013   30645 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:51:56.401059   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.472425   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:56.558421   30645 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:51:56.562865   30645 fix.go:57] fixHost completed within 2.41556105s
	I0725 16:51:56.562873   30645 start.go:82] releasing machines lock for "old-k8s-version-20220725164610-14919", held for 2.415589014s
	I0725 16:51:56.562940   30645 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725164610-14919
	I0725 16:51:56.634630   30645 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:51:56.634634   30645 ssh_runner.go:195] Run: systemctl --version
	I0725 16:51:56.634711   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.634710   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:56.712937   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:56.715060   30645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/old-k8s-version-20220725164610-14919/id_rsa Username:docker}
	I0725 16:51:57.028274   30645 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:51:57.039409   30645 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:51:57.039463   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:51:57.050978   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:51:57.064294   30645 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 16:51:57.131183   30645 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:51:57.197441   30645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:51:57.258729   30645 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:51:57.458205   30645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:51:57.493961   30645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:51:57.573579   30645 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0725 16:51:57.573720   30645 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220725164610-14919 dig +short host.docker.internal
	I0725 16:51:57.708897   30645 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:51:57.708998   30645 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:51:57.713113   30645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:51:57.723064   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:57.796445   30645 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 16:51:57.796515   30645 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:51:57.828170   30645 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 16:51:57.828195   30645 docker.go:542] Images already preloaded, skipping extraction
	I0725 16:51:57.828273   30645 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:51:57.862686   30645 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 16:51:57.862711   30645 cache_images.go:84] Images are preloaded, skipping loading
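
	(The two `docker images --format {{.Repository}}:{{.Tag}}` runs above are the preload short-circuit: when every expected image is already in the daemon, tarball extraction and image loading are skipped.) A hedged Go sketch of that membership check; the expected list is a subset copied from the stdout block above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		expected := []string{
			"k8s.gcr.io/kube-apiserver:v1.16.0",
			"k8s.gcr.io/etcd:3.3.15-0",
			"k8s.gcr.io/coredns:1.6.2",
			"k8s.gcr.io/pause:3.1",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		// Build a set of images the daemon already has.
		have := make(map[string]bool)
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		for _, img := range expected {
			if !have[img] {
				fmt.Println("missing, would extract preload:", img)
				return
			}
		}
		fmt.Println("images already preloaded, skipping extraction")
	}
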
	I0725 16:51:57.862784   30645 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 16:51:57.934841   30645 cni.go:95] Creating CNI manager for ""
	I0725 16:51:57.934857   30645 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:51:57.934882   30645 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:51:57.934897   30645 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220725164610-14919 NodeName:old-k8s-version-20220725164610-14919 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:51:57.934999   30645 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220725164610-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220725164610-14919
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
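
	(One detail worth calling out in the generated config: cgroupDriver is set to systemd because that is what the in-container Docker reported via `docker info --format {{.CgroupDriver}}` at 16:51:57.862; a mismatch between kubelet and runtime here is a classic cause of kubelet crash loops.) A hedged Go sketch that parses the KubeletConfiguration document and compares the two values; the one-field struct and the gopkg.in/yaml.v3 dependency are assumptions, not minikube code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// kubeletConfig extracts only the cgroupDriver field from the YAML above.
	type kubeletConfig struct {
		CgroupDriver string `yaml:"cgroupDriver"`
	}

	func main() {
		doc := []byte("apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ncgroupDriver: systemd\n")
		var kc kubeletConfig
		if err := yaml.Unmarshal(doc, &kc); err != nil {
			panic(err)
		}
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			panic(err)
		}
		daemon := strings.TrimSpace(string(out))
		fmt.Printf("kubelet=%s docker=%s match=%v\n", kc.CgroupDriver, daemon, kc.CgroupDriver == daemon)
	}
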
	
	I0725 16:51:57.935085   30645 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220725164610-14919 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 16:51:57.935149   30645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0725 16:51:57.942882   30645 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 16:51:57.942933   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:51:57.949836   30645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0725 16:51:57.962118   30645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 16:51:57.974768   30645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0725 16:51:57.987611   30645 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:51:57.991547   30645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:51:58.001422   30645 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919 for IP: 192.168.67.2
	I0725 16:51:58.001534   30645 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:51:58.001584   30645 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:51:58.001665   30645 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/client.key
	I0725 16:51:58.001725   30645 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key.c7fa3a9e
	I0725 16:51:58.001774   30645 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.key
	I0725 16:51:58.001977   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:51:58.002018   30645 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:51:58.002033   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:51:58.002065   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:51:58.002099   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:51:58.002130   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:51:58.002200   30645 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:51:58.002745   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:51:58.019176   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 16:51:58.035937   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:51:58.052722   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/old-k8s-version-20220725164610-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 16:51:58.069150   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:51:58.086282   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:51:58.104583   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:51:58.122151   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:51:58.138902   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:51:58.155678   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:51:58.172462   30645 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:51:58.189680   30645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 16:51:58.202927   30645 ssh_runner.go:195] Run: openssl version
	I0725 16:51:58.208487   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:51:58.216327   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:51:58.220281   30645 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:51:58.220320   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:51:58.225423   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 16:51:58.232569   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:51:58.240681   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:51:58.246603   30645 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:51:58.246655   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:51:58.252424   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
	I0725 16:51:58.259635   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:51:58.267350   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:51:58.271022   30645 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:51:58.271059   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:51:58.276368   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
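
	(The three test/ln pairs above install each CA as /etc/ssl/certs/<subject-hash>.0, e.g. b5213941.0 for minikubeCA.pem, with the hash taken from `openssl x509 -hash -noout -in <cert>`.) A hedged Go sketch of that install step, shelling out to openssl rather than reimplementing the legacy subject-hash algorithm:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA symlinks certPath into dir under the OpenSSL subject-hash name,
	// mirroring the "openssl x509 -hash" + "ln -fs" pair in the log above.
	func installCA(certPath, dir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		link := dir + "/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // -f semantics: replace any stale link first
		return link, os.Symlink(certPath, link)
	}

	func main() {
		link, err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		fmt.Println(link, err)
	}
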
	I0725 16:51:58.285978   30645 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220725164610-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725164610-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:51:58.286085   30645 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:51:55.324035   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:51:57.821695   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:51:59.822225   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:51:58.315858   30645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:51:58.326514   30645 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 16:51:58.326531   30645 kubeadm.go:626] restartCluster start
	I0725 16:51:58.326585   30645 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 16:51:58.333523   30645 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:58.333587   30645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725164610-14919
	I0725 16:51:58.406233   30645 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220725164610-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:51:58.406423   30645 kubeconfig.go:127] "old-k8s-version-20220725164610-14919" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig - will repair!
	I0725 16:51:58.406758   30645 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:51:58.408147   30645 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 16:51:58.416141   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:58.416194   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:58.424141   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:58.624252   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:58.624449   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:58.634727   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:58.824496   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:58.824556   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:58.833401   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.024564   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.024765   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.036943   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.224262   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.224449   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.234247   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.426277   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.426421   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.436848   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.624325   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.624444   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.634776   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:51:59.824436   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:51:59.824539   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:51:59.833466   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.024667   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.024784   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.034119   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.226332   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.226493   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.237410   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.424816   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.424991   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.435741   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.624358   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.624553   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.634929   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:00.824246   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:00.824311   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:00.833267   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.025582   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.025682   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.036617   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.226302   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.226523   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.237134   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.424681   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.424896   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.434950   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.434960   30645 api_server.go:165] Checking apiserver status ...
	I0725 16:52:01.435004   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:52:01.443251   30645 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:52:01.443262   30645 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
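The block above shows the restart path probing for a kube-apiserver process roughly every 200ms via `sudo pgrep -xnf kube-apiserver.*minikube.*` until it gives up and concludes the cluster needs a reconfigure. A minimal Go sketch of that poll-until-deadline pattern (the interval, timeout, and function names here are illustrative assumptions, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer probes for a matching process at a fixed interval
// until one appears or the context deadline expires.
func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.CommandContext(ctx, "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			// Matches the "timed out waiting for the condition" outcome above.
			return fmt.Errorf("apiserver did not appear: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx, 200*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}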
	I0725 16:52:01.443270   30645 kubeadm.go:1092] stopping kube-system containers ...
	I0725 16:52:01.443330   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:52:01.472271   30645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 16:52:01.482849   30645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:52:01.490579   30645 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5751 Jul 25 23:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5787 Jul 25 23:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Jul 25 23:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jul 25 23:48 /etc/kubernetes/scheduler.conf
	
	I0725 16:52:01.490646   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 16:52:01.497991   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 16:52:01.505650   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 16:52:01.513404   30645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 16:52:01.520481   30645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:52:01.528605   30645 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 16:52:01.528616   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:01.582488   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:02.177208   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:02.396495   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:52:02.452157   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
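The five invocations above re-run only the certs, kubeconfig, kubelet-start, control-plane, and etcd phases of `kubeadm init` against the existing /var/tmp/minikube/kubeadm.yaml, rather than a full init. A hedged Go sketch of driving that phase sequence (the config path and phase names follow the log; the sudo wrapper, PATH handling, and error handling are simplified assumptions):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The individual init phases run during a cluster restart, in order.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
}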
	I0725 16:52:02.507122   30645 api_server.go:51] waiting for apiserver process to appear ...
	I0725 16:52:02.507183   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:03.017988   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:01.823344   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:04.322726   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:03.516813   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:04.016024   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:04.516243   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:05.016052   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:05.516842   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:06.018016   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:06.516243   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:07.016833   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:07.516509   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:08.018237   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:06.821972   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:08.822475   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:08.516285   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:09.018225   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:09.516196   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:10.016108   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:10.518092   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:11.016235   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:11.516051   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:12.017661   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:12.517835   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:13.017094   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:11.324833   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:13.821393   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:13.517087   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:14.016089   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:14.516418   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:15.016429   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:15.516149   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:16.016347   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:16.516154   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:17.016835   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:17.516145   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:18.016344   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:15.823039   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:17.824060   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:18.516408   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:19.016498   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:19.517496   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:20.016992   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:20.516251   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:21.016222   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:21.517681   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:22.016475   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:22.516287   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:23.018246   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:20.324836   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:22.822073   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:24.822724   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:23.516453   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:24.016928   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:24.518267   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:25.016180   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:25.517130   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:26.016427   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:26.516198   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:27.018318   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:27.518273   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:28.017144   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:26.823978   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:29.324885   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:28.517115   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:29.016589   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:29.516148   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:30.018359   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:30.516196   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:31.016729   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:31.516466   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:32.016321   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:32.516187   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:33.016955   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:31.823607   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:34.323121   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:33.518380   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:34.016250   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:34.518380   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:35.017698   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:35.516226   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:36.016845   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:36.517175   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:37.016458   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:37.518343   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:38.017221   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:36.823814   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:38.824757   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:38.516631   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:39.018346   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:39.517031   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:40.016587   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:40.518374   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:41.017168   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:41.516254   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:42.016786   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:42.518371   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:43.016708   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:41.324898   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:43.821522   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:43.517350   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:44.016879   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:44.516359   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:45.016326   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:45.517079   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:46.018104   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:46.516554   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:47.016350   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:47.516869   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:48.016960   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:45.822541   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:48.322265   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:48.518539   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:49.016387   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:49.518485   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:50.016779   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:50.516308   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:51.016390   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:51.516855   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:52.016682   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:52.516798   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:53.017157   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:50.325107   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:52.822776   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:54.822863   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:53.516791   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:54.018461   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:54.518509   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:55.016394   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:55.518239   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:56.016393   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:56.516649   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:57.018403   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:57.518492   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:58.016728   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:57.322195   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:59.325110   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:52:58.516610   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:59.016695   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:52:59.516374   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:00.018527   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:00.516554   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:01.016461   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:01.518568   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:02.018357   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:02.516570   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:02.551458   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.551470   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:02.551529   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:02.580662   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.580676   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:02.580736   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:02.609061   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.609077   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:02.609153   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:02.637777   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.637789   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:02.637848   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:02.668016   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.668032   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:02.668098   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:02.695681   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.695695   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:02.695759   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:02.724166   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.724179   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:02.724241   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:02.752726   30645 logs.go:274] 0 containers: []
	W0725 16:53:02.752738   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:02.752745   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:02.752752   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:02.766718   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:02.766729   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:01.823541   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:53:03.823599   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:53:04.817904   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051150373s)
	I0725 16:53:04.818052   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:04.818058   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:04.859354   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:04.859367   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:04.872868   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:04.872886   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:04.925729   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:07.427981   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:07.518459   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:07.547888   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.547903   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:07.547963   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:07.577077   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.577088   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:07.577149   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:07.605370   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.605382   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:07.605438   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:07.634582   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.634594   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:07.634664   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:07.662717   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.662730   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:07.662796   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:07.690179   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.690191   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:07.690247   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:07.718778   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.718797   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:07.718860   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:07.750543   30645 logs.go:274] 0 containers: []
	W0725 16:53:07.750557   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:07.750566   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:07.750582   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:07.813932   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:07.813946   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:07.813953   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:07.830288   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:07.830306   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:06.323264   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:53:08.822901   30296 pod_ready.go:102] pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace has status "Ready":"False"
	I0725 16:53:09.316485   30296 pod_ready.go:81] duration metric: took 4m0.003871836s waiting for pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace to be "Ready" ...
	E0725 16:53:09.316502   30296 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-xvjk7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 16:53:09.316516   30296 pod_ready.go:38] duration metric: took 4m13.56403836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 16:53:09.316548   30296 kubeadm.go:630] restartCluster took 4m23.75685988s
	W0725 16:53:09.316641   30296 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0725 16:53:09.316663   30296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 16:53:11.757360   30296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.440666286s)
	I0725 16:53:11.757423   30296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:53:11.767435   30296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:53:11.775142   30296 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:53:11.775192   30296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:53:11.782840   30296 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:53:11.782879   30296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:53:12.113028   30296 out.go:204]   - Generating certificates and keys ...
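When the restart path times out, the log above shows the fallback: `kubeadm reset --force` tears down the stale control plane (leaving the /etc/kubernetes/*.conf files missing, hence the `ls` failures), then a fresh `kubeadm init` runs with a long `--ignore-preflight-errors` list. A rough Go sketch of that reset-then-init fallback (the flags shown are copied from the log where visible, abbreviated elsewhere; everything else is an illustrative simplification):

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	// Tear down whatever half-configured control plane is left behind.
	if err := run("kubeadm", "reset",
		"--cri-socket", "/var/run/cri-dockerd.sock", "--force"); err != nil {
		fmt.Println(err)
		return
	}
	// Re-run a full init against the same generated config, skipping the
	// preflight checks a container-based "node" cannot satisfy.
	if err := run("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=Port-10250,Swap,Mem,SystemVerification"); err != nil {
		fmt.Println(err)
	}
}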
	I0725 16:53:09.887017   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056682522s)
	I0725 16:53:09.887208   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:09.887216   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:09.934241   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:09.934269   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:12.447495   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:12.517256   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:12.548709   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.548724   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:12.548801   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:12.581560   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.581573   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:12.581636   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:12.613258   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.613277   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:12.613356   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:12.645116   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.645132   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:12.645192   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:12.678405   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.678430   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:12.678496   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:12.709850   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.709862   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:12.709929   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:12.739704   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.739717   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:12.739780   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:12.771373   30645 logs.go:274] 0 containers: []
	W0725 16:53:12.771390   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:12.771397   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:12.771409   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:13.124509   30296 out.go:204]   - Booting up control plane ...
	I0725 16:53:14.832595   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061157284s)
	I0725 16:53:14.832749   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:14.832760   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:14.882568   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:14.882589   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:14.894614   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:14.894627   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:14.964822   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:14.964845   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:14.964855   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:17.480696   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:17.516779   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:17.560432   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.560445   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:17.560504   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:17.590394   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.590408   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:17.590480   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:17.620155   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.620169   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:17.620234   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:17.651346   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.651376   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:17.651448   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:17.683049   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.683062   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:17.683121   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:17.720876   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.720905   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:17.720964   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:17.768214   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.768254   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:17.768357   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:17.800978   30645 logs.go:274] 0 containers: []
	W0725 16:53:17.800991   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:17.800999   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:17.801005   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:17.814855   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:17.814871   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:20.175651   30296 out.go:204]   - Configuring RBAC rules ...
	I0725 16:53:19.878600   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.063699299s)
	I0725 16:53:19.878715   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:19.878726   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:19.927808   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:19.927830   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:19.942138   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:19.942177   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:20.000061   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:22.501063   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:22.516620   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:22.546166   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.546178   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:22.546235   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:22.574812   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.574824   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:22.574886   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:22.604962   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.604974   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:22.605036   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:22.636264   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.636278   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:22.636339   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:22.665920   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.665932   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:22.665993   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:22.696167   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.696179   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:22.696236   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:22.729381   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.729392   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:22.729454   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:22.768159   30645 logs.go:274] 0 containers: []
	W0725 16:53:22.768172   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:22.768207   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:22.768215   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:22.813804   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:22.813818   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:22.826686   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:22.826700   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:22.889943   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:22.889958   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:22.889964   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:22.905871   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:22.905885   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:20.554422   30296 cni.go:95] Creating CNI manager for ""
	I0725 16:53:20.554435   30296 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:53:20.554455   30296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 16:53:20.554509   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b minikube.k8s.io/name=no-preload-20220725164719-14919 minikube.k8s.io/updated_at=2022_07_25T16_53_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:20.554518   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:20.815468   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:20.815488   30296 ops.go:34] apiserver oom_adj: -16
	I0725 16:53:21.371512   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:21.872708   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:22.372928   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:22.871252   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:23.371865   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:23.872764   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:24.372857   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:24.871534   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:24.961550   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055639315s)
	I0725 16:53:27.462514   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:27.516705   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:27.547013   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.547025   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:27.547088   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:27.575083   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.575095   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:27.575151   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:27.607755   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.607767   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:27.607822   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:27.636173   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.636184   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:27.636251   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:27.664856   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.664867   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:27.664930   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:27.695642   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.695655   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:27.695717   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:27.725344   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.725358   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:27.725417   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:27.754182   30645 logs.go:274] 0 containers: []
	W0725 16:53:27.754195   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:27.754202   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:27.754208   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:27.767896   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:27.767911   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:27.824064   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:27.824076   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:27.824083   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:27.838119   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:27.838131   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:25.371471   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:25.872363   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:26.372010   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:26.871172   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:27.371984   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:27.871600   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:28.371423   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:28.872789   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:29.372643   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:29.872028   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:29.892047   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053889683s)
	I0725 16:53:29.892158   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:29.892165   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:32.435110   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:32.516701   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:32.562525   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.562538   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:32.562604   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:32.599075   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.599087   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:32.599145   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:32.640588   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.640615   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:32.640684   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:32.675235   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.675248   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:32.675311   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:32.711380   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.711392   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:32.711462   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:32.745360   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.745373   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:32.745433   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:32.782468   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.782484   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:32.782569   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:32.815537   30645 logs.go:274] 0 containers: []
	W0725 16:53:32.815551   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:32.815557   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:32.815565   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:32.828567   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:32.828584   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:32.884919   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:32.884933   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:32.884941   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:32.900762   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:32.900776   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:30.373362   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:30.873259   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:31.373357   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:31.871239   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:32.372542   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:32.871171   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:33.372834   30296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 16:53:33.432185   30296 kubeadm.go:1045] duration metric: took 12.877635728s to wait for elevateKubeSystemPrivileges.
	I0725 16:53:33.432203   30296 kubeadm.go:397] StartCluster complete in 4m47.911603505s
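The ~500ms-spaced `kubectl get sa default` runs above are a poll for the default service account — the elevateKubeSystemPrivileges wait that the preceding line reports as taking 12.877635728s. A sketch of that poll-until-deadline pattern (not minikube's code; binary and kubeconfig paths copied from the log, the 1-minute deadline is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(1 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Re-run the check until it exits 0, sleeping the same
    		// half second visible in the log timestamps.
    		err := exec.Command("sudo",
    			"/var/lib/minikube/binaries/v1.24.3/kubectl",
    			"get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			fmt.Println("default service account exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for default service account")
    }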
	I0725 16:53:33.432223   30296 settings.go:142] acquiring lock: {Name:mkcd702d4f365962a78fa014f59c2f8489658e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:53:33.432300   30296 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:53:33.432839   30296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
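The WriteFile line above takes a named lock (Delay:500ms, Timeout:1m0s) before rewriting kubeconfig, so concurrent minikube processes sharing the file don't clobber each other. A generic stand-in using an O_EXCL lock file — not minikube's locking package, just the retry-with-delay idea:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquire retries creating the lock file until it succeeds or the
    // timeout passes; the returned func releases the lock.
    func acquire(lockPath string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(lockPath) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s", lockPath)
    		}
    		time.Sleep(delay) // matches the 500ms retry delay in the log
    	}
    }

    func main() {
    	release, err := acquire("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held; safe to rewrite kubeconfig")
    }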
	I0725 16:53:33.947550   30296 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220725164719-14919" rescaled to 1
	I0725 16:53:33.947586   30296 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 16:53:33.947600   30296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 16:53:33.947630   30296 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 16:53:33.970331   30296 out.go:177] * Verifying Kubernetes components...
	I0725 16:53:33.947781   30296 config.go:178] Loaded profile config "no-preload-20220725164719-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:53:33.970396   30296 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220725164719-14919"
	I0725 16:53:33.970401   30296 addons.go:65] Setting dashboard=true in profile "no-preload-20220725164719-14919"
	I0725 16:53:33.970409   30296 addons.go:65] Setting metrics-server=true in profile "no-preload-20220725164719-14919"
	I0725 16:53:33.970413   30296 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220725164719-14919"
	I0725 16:53:34.031972   30296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220725164719-14919"
	I0725 16:53:34.031978   30296 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220725164719-14919"
	I0725 16:53:34.031985   30296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:53:34.031986   30296 addons.go:153] Setting addon metrics-server=true in "no-preload-20220725164719-14919"
	I0725 16:53:34.031983   30296 addons.go:153] Setting addon dashboard=true in "no-preload-20220725164719-14919"
	W0725 16:53:34.031995   30296 addons.go:162] addon metrics-server should already be in state true
	W0725 16:53:34.031999   30296 addons.go:162] addon storage-provisioner should already be in state true
	W0725 16:53:34.032003   30296 addons.go:162] addon dashboard should already be in state true
	I0725 16:53:34.032038   30296 host.go:66] Checking if "no-preload-20220725164719-14919" exists ...
	I0725 16:53:34.032039   30296 host.go:66] Checking if "no-preload-20220725164719-14919" exists ...
	I0725 16:53:34.032073   30296 host.go:66] Checking if "no-preload-20220725164719-14919" exists ...
	I0725 16:53:34.032281   30296 cli_runner.go:164] Run: docker container inspect no-preload-20220725164719-14919 --format={{.State.Status}}
	I0725 16:53:34.033354   30296 cli_runner.go:164] Run: docker container inspect no-preload-20220725164719-14919 --format={{.State.Status}}
	I0725 16:53:34.033363   30296 cli_runner.go:164] Run: docker container inspect no-preload-20220725164719-14919 --format={{.State.Status}}
	I0725 16:53:34.033360   30296 cli_runner.go:164] Run: docker container inspect no-preload-20220725164719-14919 --format={{.State.Status}}
	I0725 16:53:34.043603   30296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 16:53:34.057441   30296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220725164719-14919
	I0725 16:53:34.186692   30296 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 16:53:34.207298   30296 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 16:53:34.208731   30296 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220725164719-14919"
	I0725 16:53:34.228297   30296 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	W0725 16:53:34.249380   30296 addons.go:162] addon default-storageclass should already be in state true
	I0725 16:53:34.261772   30296 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220725164719-14919" to be "Ready" ...
	I0725 16:53:34.270416   30296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 16:53:34.270448   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 16:53:34.270460   30296 host.go:66] Checking if "no-preload-20220725164719-14919" exists ...
	I0725 16:53:34.291176   30296 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 16:53:34.291315   30296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725164719-14919
	I0725 16:53:34.291737   30296 cli_runner.go:164] Run: docker container inspect no-preload-20220725164719-14919 --format={{.State.Status}}
	I0725 16:53:34.296062   30296 node_ready.go:49] node "no-preload-20220725164719-14919" has status "Ready":"True"
	I0725 16:53:34.333550   30296 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 16:53:34.333555   30296 node_ready.go:38] duration metric: took 42.367204ms waiting for node "no-preload-20220725164719-14919" to be "Ready" ...
	I0725 16:53:34.333562   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 16:53:34.312459   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 16:53:34.333565   30296 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 16:53:34.333595   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 16:53:34.333628   30296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725164719-14919
	I0725 16:53:34.333685   30296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725164719-14919
	I0725 16:53:34.342717   30296 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-pk97r" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:34.441662   30296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50685 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/no-preload-20220725164719-14919/id_rsa Username:docker}
	I0725 16:53:34.442634   30296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50685 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/no-preload-20220725164719-14919/id_rsa Username:docker}
	I0725 16:53:34.442716   30296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50685 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/no-preload-20220725164719-14919/id_rsa Username:docker}
	I0725 16:53:34.444590   30296 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 16:53:34.444602   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 16:53:34.444658   30296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725164719-14919
	I0725 16:53:34.526466   30296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50685 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/no-preload-20220725164719-14919/id_rsa Username:docker}
	I0725 16:53:34.608549   30296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 16:53:34.609187   30296 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 16:53:34.609201   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 16:53:34.611760   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 16:53:34.611786   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 16:53:34.634836   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 16:53:34.634860   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 16:53:34.638455   30296 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 16:53:34.638473   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 16:53:34.710670   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 16:53:34.710687   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 16:53:34.718410   30296 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 16:53:34.718428   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 16:53:34.727332   30296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 16:53:34.738119   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 16:53:34.738133   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 16:53:34.744747   30296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 16:53:34.823409   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 16:53:34.823440   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 16:53:34.918418   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 16:53:34.918433   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 16:53:35.013403   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 16:53:35.013429   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 16:53:35.111110   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 16:53:35.111127   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 16:53:35.114460   30296 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.070811321s)
	I0725 16:53:35.114484   30296 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
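The sed pipeline that just completed rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host (192.168.65.2 here), by inserting a `hosts` block ahead of the `forward . /etc/resolv.conf` directive. The same string surgery in Go (a sketch; the sample Corefile below is hypothetical, the IP is from the log):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord splices a hosts{} block in front of the forward
    // directive, mirroring what the logged sed expression does.
    func injectHostRecord(corefile, ip string) string {
    	hosts := "        hosts {\n           " + ip +
    		" host.minikube.internal\n           fallthrough\n        }\n"
    	i := strings.Index(corefile, "        forward . /etc/resolv.conf")
    	if i < 0 {
    		return corefile // directive not found; leave untouched
    	}
    	return corefile[:i] + hosts + corefile[i:]
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.65.2"))
    }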
	I0725 16:53:35.138037   30296 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 16:53:35.138060   30296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 16:53:35.230663   30296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 16:53:35.542338   30296 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220725164719-14919"
	I0725 16:53:36.412013   30296 pod_ready.go:102] pod "coredns-6d4b75cb6d-pk97r" in "kube-system" namespace has status "Ready":"False"
	I0725 16:53:36.847930   30296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.617220122s)
	I0725 16:53:36.872489   30296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 16:53:34.964971   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064168222s)
	I0725 16:53:34.965217   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:34.965226   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:37.509560   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:38.016974   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:38.049541   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.049558   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:38.049618   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:38.080721   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.080733   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:38.080816   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:38.109733   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.109744   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:38.109803   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:38.141301   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.141313   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:38.141400   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:38.172007   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.172020   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:38.172078   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:38.204450   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.204463   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:38.204520   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:38.234269   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.234281   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:38.234336   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:38.263197   30645 logs.go:274] 0 containers: []
	W0725 16:53:38.263210   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:38.263217   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:38.263223   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:36.893973   30296 addons.go:414] enableAddons completed in 2.946347446s
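As the addon flow above shows, each manifest is first copied from memory to /etc/kubernetes/addons on the node and the whole set is then applied in a single kubectl invocation. A simplified sketch that mirrors the logged command shape (file list abbreviated; binary, kubeconfig, and addon paths copied from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// sudo accepts the leading KUBECONFIG=... assignment, exactly as
    	// in the logged command.
    	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.24.3/kubectl", "apply"}
    	for _, f := range []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-dp.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    	} {
    		args = append(args, "-f", f)
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	fmt.Printf("%s err=%v\n", out, err)
    }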
	I0725 16:53:38.857910   30296 pod_ready.go:92] pod "coredns-6d4b75cb6d-pk97r" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:38.857924   30296 pod_ready.go:81] duration metric: took 4.51514414s waiting for pod "coredns-6d4b75cb6d-pk97r" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:38.857932   30296 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-zc96c" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.868381   30296 pod_ready.go:92] pod "coredns-6d4b75cb6d-zc96c" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:39.868398   30296 pod_ready.go:81] duration metric: took 1.010452431s waiting for pod "coredns-6d4b75cb6d-zc96c" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.868406   30296 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.900327   30296 pod_ready.go:92] pod "etcd-no-preload-20220725164719-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:39.900341   30296 pod_ready.go:81] duration metric: took 31.928357ms waiting for pod "etcd-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.900352   30296 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.905064   30296 pod_ready.go:92] pod "kube-apiserver-no-preload-20220725164719-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:39.905074   30296 pod_ready.go:81] duration metric: took 4.716348ms waiting for pod "kube-apiserver-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.905080   30296 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.910729   30296 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220725164719-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:39.910741   30296 pod_ready.go:81] duration metric: took 5.655476ms waiting for pod "kube-controller-manager-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:39.910748   30296 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r8xpz" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:40.055516   30296 pod_ready.go:92] pod "kube-proxy-r8xpz" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:40.055529   30296 pod_ready.go:81] duration metric: took 144.773719ms waiting for pod "kube-proxy-r8xpz" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:40.055537   30296 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:40.457530   30296 pod_ready.go:92] pod "kube-scheduler-no-preload-20220725164719-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:53:40.457544   30296 pod_ready.go:81] duration metric: took 401.996265ms waiting for pod "kube-scheduler-no-preload-20220725164719-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:53:40.457550   30296 pod_ready.go:38] duration metric: took 6.123930085s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 16:53:40.457571   30296 api_server.go:51] waiting for apiserver process to appear ...
	I0725 16:53:40.457632   30296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:40.470111   30296 api_server.go:71] duration metric: took 6.522458607s to wait for apiserver process to appear ...
	I0725 16:53:40.470126   30296 api_server.go:87] waiting for apiserver healthz status ...
	I0725 16:53:40.470134   30296 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50689/healthz ...
	I0725 16:53:40.476144   30296 api_server.go:266] https://127.0.0.1:50689/healthz returned 200:
	ok
	I0725 16:53:40.477496   30296 api_server.go:140] control plane version: v1.24.3
	I0725 16:53:40.477505   30296 api_server.go:130] duration metric: took 7.374951ms to wait for apiserver health ...
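The healthz wait above is a plain HTTPS GET against the apiserver's forwarded port, expecting a 200 with body "ok"; the apiserver serves a self-signed certificate, so verification has to be skipped. An illustrative Go sketch (not minikube's code; port 50689 taken from the log):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		// Self-signed apiserver cert: skip verification for the probe.
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://127.0.0.1:50689/healthz")
    	if err != nil {
    		fmt.Println("healthz check failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }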
	I0725 16:53:40.477510   30296 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 16:53:40.658214   30296 system_pods.go:59] 9 kube-system pods found
	I0725 16:53:40.658228   30296 system_pods.go:61] "coredns-6d4b75cb6d-pk97r" [5c8dd765-07cc-442c-a097-c898019f7c02] Running
	I0725 16:53:40.658233   30296 system_pods.go:61] "coredns-6d4b75cb6d-zc96c" [d09478f3-429d-4f03-891b-19ac59672799] Running
	I0725 16:53:40.658237   30296 system_pods.go:61] "etcd-no-preload-20220725164719-14919" [888ae756-4b50-408b-9e35-272e796ae5d4] Running
	I0725 16:53:40.658241   30296 system_pods.go:61] "kube-apiserver-no-preload-20220725164719-14919" [f2572bd5-989c-414c-8cdb-f771c052fec7] Running
	I0725 16:53:40.658244   30296 system_pods.go:61] "kube-controller-manager-no-preload-20220725164719-14919" [31b0f2fc-9b4d-416d-b3da-c3d7c2038175] Running
	I0725 16:53:40.658248   30296 system_pods.go:61] "kube-proxy-r8xpz" [9d89a226-d4b6-4543-9b95-c04b32e36bb3] Running
	I0725 16:53:40.658251   30296 system_pods.go:61] "kube-scheduler-no-preload-20220725164719-14919" [b2d6b72d-19b5-463e-9d34-81719d09e606] Running
	I0725 16:53:40.658257   30296 system_pods.go:61] "metrics-server-5c6f97fb75-p6xmp" [e4b5868d-0220-4d63-8b47-1ed865b090cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 16:53:40.658262   30296 system_pods.go:61] "storage-provisioner" [96b01ade-dad8-4551-a42e-ec5920059ae9] Running
	I0725 16:53:40.658266   30296 system_pods.go:74] duration metric: took 180.75083ms to wait for pod list to return data ...
	I0725 16:53:40.658271   30296 default_sa.go:34] waiting for default service account to be created ...
	I0725 16:53:40.855061   30296 default_sa.go:45] found service account: "default"
	I0725 16:53:40.855072   30296 default_sa.go:55] duration metric: took 196.796ms for default service account to be created ...
	I0725 16:53:40.855082   30296 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 16:53:41.059771   30296 system_pods.go:86] 9 kube-system pods found
	I0725 16:53:41.059786   30296 system_pods.go:89] "coredns-6d4b75cb6d-pk97r" [5c8dd765-07cc-442c-a097-c898019f7c02] Running
	I0725 16:53:41.059791   30296 system_pods.go:89] "coredns-6d4b75cb6d-zc96c" [d09478f3-429d-4f03-891b-19ac59672799] Running
	I0725 16:53:41.059795   30296 system_pods.go:89] "etcd-no-preload-20220725164719-14919" [888ae756-4b50-408b-9e35-272e796ae5d4] Running
	I0725 16:53:41.059799   30296 system_pods.go:89] "kube-apiserver-no-preload-20220725164719-14919" [f2572bd5-989c-414c-8cdb-f771c052fec7] Running
	I0725 16:53:41.059807   30296 system_pods.go:89] "kube-controller-manager-no-preload-20220725164719-14919" [31b0f2fc-9b4d-416d-b3da-c3d7c2038175] Running
	I0725 16:53:41.059813   30296 system_pods.go:89] "kube-proxy-r8xpz" [9d89a226-d4b6-4543-9b95-c04b32e36bb3] Running
	I0725 16:53:41.059817   30296 system_pods.go:89] "kube-scheduler-no-preload-20220725164719-14919" [b2d6b72d-19b5-463e-9d34-81719d09e606] Running
	I0725 16:53:41.059823   30296 system_pods.go:89] "metrics-server-5c6f97fb75-p6xmp" [e4b5868d-0220-4d63-8b47-1ed865b090cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 16:53:41.059829   30296 system_pods.go:89] "storage-provisioner" [96b01ade-dad8-4551-a42e-ec5920059ae9] Running
	I0725 16:53:41.059835   30296 system_pods.go:126] duration metric: took 204.745744ms to wait for k8s-apps to be running ...
	I0725 16:53:41.059840   30296 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 16:53:41.059893   30296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:53:41.070559   30296 system_svc.go:56] duration metric: took 10.713694ms WaitForService to wait for kubelet.
	I0725 16:53:41.070573   30296 kubeadm.go:572] duration metric: took 7.122919076s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 16:53:41.070592   30296 node_conditions.go:102] verifying NodePressure condition ...
	I0725 16:53:41.255861   30296 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 16:53:41.255873   30296 node_conditions.go:123] node cpu capacity is 6
	I0725 16:53:41.255884   30296 node_conditions.go:105] duration metric: took 185.287059ms to run NodePressure ...
	I0725 16:53:41.255894   30296 start.go:216] waiting for startup goroutines ...
	I0725 16:53:41.289611   30296 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0725 16:53:41.311552   30296 out.go:177] * Done! kubectl is now configured to use "no-preload-20220725164719-14919" cluster and "default" namespace by default
	I0725 16:53:40.321875   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058620912s)
	I0725 16:53:40.321982   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:40.321997   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:40.368300   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:40.368320   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:40.382186   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:40.382201   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:40.442970   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:40.442981   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:40.442987   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:42.961513   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:43.017747   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:43.047988   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.048000   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:43.048060   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:43.082642   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.082655   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:43.082783   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:43.112812   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.112825   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:43.112882   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:43.142469   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.142480   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:43.142543   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:43.172983   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.172996   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:43.173055   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:43.202378   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.202390   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:43.202456   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:43.232448   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.232462   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:43.232525   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:43.262110   30645 logs.go:274] 0 containers: []
	W0725 16:53:43.262123   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:43.262132   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:43.262140   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:45.319732   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057561012s)
	I0725 16:53:45.319846   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:45.319854   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:45.365923   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:45.365943   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:45.379753   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:45.379771   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:45.457284   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:45.457297   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:45.457305   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:47.975040   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:48.018317   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:48.049476   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.049489   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:48.049548   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:48.078953   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.078965   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:48.079037   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:48.109058   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.109071   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:48.109129   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:48.139159   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.139172   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:48.139228   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:48.169256   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.169267   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:48.169325   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:48.201872   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.201885   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:48.201948   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:48.234103   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.234115   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:48.234178   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:48.266166   30645 logs.go:274] 0 containers: []
	W0725 16:53:48.266179   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:48.266186   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:48.266197   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:48.314601   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:48.318681   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:48.332826   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:48.332841   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:48.388055   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:48.388067   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:48.388075   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:48.402457   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:48.402469   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:50.456667   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054172699s)
	I0725 16:53:52.958273   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:53.018286   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:53.051254   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.051266   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:53.051325   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:53.080846   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.080858   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:53.080914   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:53.109160   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.109183   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:53.109257   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:53.137615   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.137628   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:53.137684   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:53.167697   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.167709   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:53.167765   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:53.198156   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.198169   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:53.198278   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:53.227704   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.227716   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:53.227773   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:53.257307   30645 logs.go:274] 0 containers: []
	W0725 16:53:53.257320   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:53.257327   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:53.257336   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:53.299296   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:53.317934   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:53.330698   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:53.330712   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:53.385054   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:53.385066   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:53.385073   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:53.399132   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:53.399145   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:53:55.451174   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052002587s)
	I0725 16:53:57.951589   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:53:58.016855   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:53:58.049205   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.049216   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:53:58.049274   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:53:58.079929   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.079941   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:53:58.080000   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:53:58.109713   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.109725   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:53:58.109785   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:53:58.138994   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.139008   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:53:58.139116   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:53:58.168661   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.168675   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:53:58.168733   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:53:58.197795   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.197807   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:53:58.197867   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:53:58.226708   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.226719   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:53:58.226777   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:53:58.255098   30645 logs.go:274] 0 containers: []
	W0725 16:53:58.255109   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:53:58.255116   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:53:58.255123   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:53:58.295859   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:53:58.317170   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:53:58.329926   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:53:58.329941   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:53:58.382781   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:53:58.382793   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:53:58.382826   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:53:58.397360   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:53:58.397372   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:00.450881   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053483262s)
	I0725 16:54:02.951232   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:03.018983   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:03.050556   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.050569   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:03.050627   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:03.079230   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.079242   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:03.079298   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:03.108412   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.108425   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:03.108483   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:03.136613   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.136626   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:03.136688   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:03.165794   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.165805   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:03.165862   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:03.194455   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.194471   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:03.194539   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:03.226412   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.226426   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:03.226490   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:03.261052   30645 logs.go:274] 0 containers: []
	W0725 16:54:03.261064   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:03.261072   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:03.261081   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:05.315384   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054277623s)
	I0725 16:54:05.315492   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:05.315500   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:05.354732   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:05.354744   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:05.366506   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:05.366519   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:05.419168   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:05.419178   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:05.419185   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:07.935013   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:08.017181   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:08.048536   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.048557   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:08.048619   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:08.080579   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.080592   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:08.080652   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:08.108274   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.108287   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:08.108346   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:08.138319   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.138331   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:08.138390   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:08.168384   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.168395   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:08.168452   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:08.198022   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.198034   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:08.198092   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:08.226920   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.226933   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:08.226991   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:08.257052   30645 logs.go:274] 0 containers: []
	W0725 16:54:08.257063   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:08.257070   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:08.257078   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:08.268657   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:08.268690   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:08.320782   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:08.320793   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:08.320799   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:08.334711   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:08.334722   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:10.390667   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05591657s)
	I0725 16:54:10.390776   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:10.390784   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:12.930154   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:13.016938   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:13.046701   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.046713   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:13.046769   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:13.076212   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.076225   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:13.076282   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:13.106089   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.106099   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:13.106147   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:13.136688   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.136702   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:13.136762   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:13.166341   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.166353   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:13.166412   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:13.194833   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.194844   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:13.194910   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:13.223450   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.223462   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:13.223522   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:13.253571   30645 logs.go:274] 0 containers: []
	W0725 16:54:13.253583   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:13.253590   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:13.253596   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:13.296069   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:13.296080   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:13.308497   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:13.317701   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:13.373112   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:13.373126   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:13.373135   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:13.387086   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:13.387099   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:15.443702   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056574496s)
	I0725 16:54:17.946094   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:18.019154   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:18.050260   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.050273   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:18.050335   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:18.079777   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.079789   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:18.079847   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:18.111380   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.111393   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:18.111445   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:18.143959   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.143969   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:18.144021   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:18.180312   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.180332   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:18.180399   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:18.215895   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.215911   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:18.215963   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:18.252789   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.252802   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:18.252852   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:18.290782   30645 logs.go:274] 0 containers: []
	W0725 16:54:18.290810   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:18.290818   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:18.290847   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:18.303512   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:18.317352   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:18.376087   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:18.376098   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:18.376106   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:18.390833   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:18.390853   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:20.449118   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05823903s)
	I0725 16:54:20.449231   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:20.449238   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:22.992397   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:23.017255   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:23.045826   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.045844   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:23.045915   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:23.075162   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.075174   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:23.075229   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:23.105247   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.105260   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:23.105315   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:23.134037   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.134056   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:23.134113   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:23.163197   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.163211   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:23.163269   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:23.192645   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.192657   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:23.192714   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:23.220793   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.220804   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:23.220863   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:23.250836   30645 logs.go:274] 0 containers: []
	W0725 16:54:23.250847   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:23.250854   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:23.250860   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:25.307612   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056726692s)
	I0725 16:54:25.307719   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:25.307726   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:25.346156   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:25.346168   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:25.358492   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:25.358504   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:25.410340   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:25.410351   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:25.410358   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:27.924097   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:28.017834   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:28.049566   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.049580   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:28.049646   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:28.079671   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.079685   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:28.079744   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:28.108629   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.108641   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:28.108696   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:28.137881   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.137893   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:28.137954   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:28.166821   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.166834   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:28.166898   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:28.196515   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.196527   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:28.196590   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:28.225959   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.225971   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:28.226028   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:28.254555   30645 logs.go:274] 0 containers: []
	W0725 16:54:28.254567   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:28.254574   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:28.254581   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:30.308050   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053443356s)
	I0725 16:54:30.308156   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:30.308162   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:30.347803   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:30.347816   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:30.360116   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:30.360128   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:30.413675   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:30.413687   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:30.413693   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:32.929655   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:33.019242   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:33.052472   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.052485   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:33.052542   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:33.081513   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.081531   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:33.081586   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:33.112328   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.112340   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:33.112399   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:33.140741   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.140755   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:33.140820   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:33.171364   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.171382   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:33.171441   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:33.203103   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.203116   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:33.203176   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:33.233444   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.233456   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:33.233522   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:33.265044   30645 logs.go:274] 0 containers: []
	W0725 16:54:33.265056   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:33.265063   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:33.265071   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:54:33.306110   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:54:33.317535   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:54:33.330969   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:54:33.330983   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:54:33.383185   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:54:33.383196   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:54:33.383205   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:54:33.396721   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:54:33.396739   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:54:35.470448   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.07368252s)
	I0725 16:54:37.970815   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:54:38.017644   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:54:38.047388   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.047404   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:54:38.047456   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:54:38.077940   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.077954   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:54:38.078049   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:54:38.111761   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.111773   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:54:38.111835   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:54:38.148082   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.148095   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:54:38.148162   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:54:38.180302   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.180314   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:54:38.180369   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:54:38.211612   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.211627   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:54:38.211690   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:54:38.241709   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.241720   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:54:38.241775   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:54:38.273560   30645 logs.go:274] 0 containers: []
	W0725 16:54:38.273574   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:54:38.273581   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:54:38.273588   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 23:48:41 UTC, end at Mon 2022-07-25 23:54:42 UTC. --
	Jul 25 23:53:10 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:10.884080291Z" level=info msg="ignoring event" container=829f65d4842bc79a48d1135be17c2992534537de97f9d73f8c9ce30adbfe4a28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:10 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:10.954167472Z" level=info msg="ignoring event" container=51b13151cda09a31c7f07b36e7af955cfdfa4c09f8c4870eab94f1bf12a5b18f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:11 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:11.038794525Z" level=info msg="ignoring event" container=98fe17bcba956ef7e47218b4d5bc668dc58a4e2af4c9d8dae663da40b1b26ffb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:11 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:11.110105825Z" level=info msg="ignoring event" container=92548eb878584890ad3b6d104da9daabf2c90ffa04d529d7e77b4fa21e5a9253 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:11 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:11.228089209Z" level=info msg="ignoring event" container=e51c868f6ca7aeb1a5b57e8fee62c6dbccc46b777a3d659cea5fc5048c45fb66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:11 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:11.297820545Z" level=info msg="ignoring event" container=8db9fb247dba9757e04a108b4b71f18f864c4d144c4607357d0863fe12df5b21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:11 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:11.377998687Z" level=info msg="ignoring event" container=66e62855ca93d0c2525f04999ef9cf81f26612ee5586ed17983cd5764ed17f02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:36 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:36.571409006Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:53:36 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:36.571431939Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:53:36 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:36.573611331Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:53:38 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:38.181780075Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 23:53:38 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:38.494867374Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 23:53:42 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:42.117737604Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 25 23:53:42 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:42.177160806Z" level=info msg="ignoring event" container=b3338e80827364cee061681880366d77453088d80ea5d4d0649216c6dfa4abab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:42 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:42.620389575Z" level=info msg="ignoring event" container=d78e67bf19f151d984686a3944cf7b8f4e07f1ed8150c85d7e72538359ff65f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:42 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:42.720606079Z" level=info msg="ignoring event" container=112d188f94f9b91bd7b740e94cabea00e1f9b8f861ab97c0485ce66f2d0d2222 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:42 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:42.785404270Z" level=info msg="ignoring event" container=3e8a6b8ba991b939651b7a9f06182d7460cd18d48d1d58527096504562dd58b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:53:49 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:49.640069131Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:53:49 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:49.640520790Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:53:49 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:53:49.641733473Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:54:00 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:54:00.763004778Z" level=info msg="ignoring event" container=8fb3d6ea2c67df0206acc1e6d0beac72517f3ba4765f4954b0a044d792ebd6fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 23:54:39 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:54:39.232223902Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:54:39 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:54:39.232268649Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:54:39 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:54:39.280720016Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 23:54:39 no-preload-20220725164719-14919 dockerd[562]: time="2022-07-25T23:54:39.945051488Z" level=info msg="ignoring event" container=b1458e1bcf52755950ace3d26eda5dcfd9635e4cfa9c44a1cce7182c89221c19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	b1458e1bcf527       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   3                   e3479f6304fca
	1187dc7cc8b13       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   56 seconds ago       Running             kubernetes-dashboard        0                   b04cd97d9099a
	93d6948c9c496       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   f7a9ea66aab0b
	f7dcf25a62514       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   70044590c4c53
	b6606566197a3       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   e79f825231149
	fb877ae4dac25       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   ca3dbd6e8c99d
	a6a7fdc4f7300       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   3a2bc012ba6a7
	8ac3f526bf4fd       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   58bd6152d6137
	7667f4a88453a       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   b9af40698474e
	
	* 
	* ==> coredns [f7dcf25a6251] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220725164719-14919
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220725164719-14919
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b
	                    minikube.k8s.io/name=no-preload-20220725164719-14919
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T16_53_20_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 23:53:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220725164719-14919
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 23:54:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 23:54:35 +0000   Mon, 25 Jul 2022 23:53:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 23:54:35 +0000   Mon, 25 Jul 2022 23:53:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 23:54:35 +0000   Mon, 25 Jul 2022 23:53:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 23:54:35 +0000   Mon, 25 Jul 2022 23:53:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-20220725164719-14919
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                82882aca-6043-459a-8f9a-a031699e1ba4
	  Boot ID:                    95c3cee9-5325-46b1-8645-b2afb4791ab2
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-pk97r                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     69s
	  kube-system                 etcd-no-preload-20220725164719-14919                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kube-apiserver-no-preload-20220725164719-14919             250m (4%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-no-preload-20220725164719-14919    200m (3%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-r8xpz                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-no-preload-20220725164719-14919             100m (1%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 metrics-server-5c6f97fb75-p6xmp                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         67s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-7dmwv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-9c5cf                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 68s                kube-proxy       
	  Normal  NodeHasSufficientMemory  88s (x5 over 89s)  kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x4 over 89s)  kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x4 over 89s)  kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientPID
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  82s                kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s                kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s                kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientPID
	  Normal  NodeReady                82s                kubelet          Node no-preload-20220725164719-14919 status is now: NodeReady
	  Normal  RegisteredNode           70s                node-controller  Node no-preload-20220725164719-14919 event: Registered Node no-preload-20220725164719-14919 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node no-preload-20220725164719-14919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [8ac3f526bf4f] <==
	* {"level":"info","ts":"2022-07-25T23:53:14.825Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-25T23:53:14.829Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T23:53:14.829Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T23:53:14.829Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T23:53:14.829Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T23:53:14.829Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-25T23:53:15.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T23:53:15.027Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-20220725164719-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T23:53:15.027Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:53:15.027Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T23:53:15.027Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:53:15.028Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:53:15.028Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:53:15.028Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T23:53:15.028Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T23:53:15.029Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T23:53:15.029Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T23:53:15.029Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T23:53:20.260Z","caller":"traceutil/trace.go:171","msg":"trace[1187369827] transaction","detail":"{read_only:false; response_revision:229; number_of_response:1; }","duration":"107.802684ms","start":"2022-07-25T23:53:20.152Z","end":"2022-07-25T23:53:20.260Z","steps":["trace[1187369827] 'process raft request'  (duration: 33.093498ms)","trace[1187369827] 'compare'  (duration: 74.367545ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  23:54:42 up  1:01,  0 users,  load average: 0.65, 1.05, 1.25
	Linux no-preload-20220725164719-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [7667f4a88453] <==
	* I0725 23:53:19.876472       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 23:53:20.416403       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 23:53:20.421540       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0725 23:53:20.429665       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 23:53:20.503648       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 23:53:33.532114       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0725 23:53:33.582788       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 23:53:34.372721       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 23:53:35.544816       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.105.20.57]
	W0725 23:53:36.424216       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 23:53:36.424293       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 23:53:36.424307       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 23:53:36.424330       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 23:53:36.424540       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 23:53:36.425542       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 23:53:36.842060       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.107.143.128]
	I0725 23:53:36.860677       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.165.83]
	W0725 23:54:36.382644       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 23:54:36.382688       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 23:54:36.382694       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 23:54:36.383731       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 23:54:36.383941       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 23:54:36.383985       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [a6a7fdc4f730] <==
	* I0725 23:53:33.734019       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zc96c"
	I0725 23:53:33.739512       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-pk97r"
	I0725 23:53:33.760180       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-zc96c"
	I0725 23:53:35.360047       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0725 23:53:35.363914       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0725 23:53:35.426011       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0725 23:53:35.431240       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-p6xmp"
	I0725 23:53:36.695737       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0725 23:53:36.701666       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 23:53:36.703776       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0725 23:53:36.725053       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 23:53:36.725464       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 23:53:36.730312       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0725 23:53:36.730694       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 23:53:36.730807       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 23:53:36.736159       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 23:53:36.736485       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 23:53:36.739209       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 23:53:36.739445       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 23:53:36.745158       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 23:53:36.745192       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 23:53:36.752244       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-9c5cf"
	I0725 23:53:36.828478       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-7dmwv"
	E0725 23:54:35.106677       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0725 23:54:35.123952       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
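
Note on the dashboard FailedCreate churn above: it is a startup ordering race, not a persistent failure. The ReplicaSet controller retries pod creation until the kubernetes-dashboard ServiceAccount exists, and both SuccessfulCreate events land within the same second. A minimal client-go sketch of the same wait-for-ServiceAccount check, assuming a reachable default kubeconfig (path, namespace, and timeout are illustrative, not taken from this run):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the ServiceAccount the ReplicaSet controller needs exists,
	// mirroring the retry loop visible in the controller-manager log above.
	err = wait.PollImmediate(200*time.Millisecond, 30*time.Second, func() (bool, error) {
		_, err := client.CoreV1().ServiceAccounts("kubernetes-dashboard").
			Get(context.TODO(), "kubernetes-dashboard", metav1.GetOptions{})
		if err != nil {
			return false, nil // not found yet; keep retrying
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("serviceaccount kubernetes-dashboard/kubernetes-dashboard exists")
}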
	
	* 
	* ==> kube-proxy [b6606566197a] <==
	* I0725 23:53:34.270515       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 23:53:34.270601       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 23:53:34.270683       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 23:53:34.366134       1 server_others.go:206] "Using iptables Proxier"
	I0725 23:53:34.366248       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 23:53:34.366297       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 23:53:34.366313       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 23:53:34.366343       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 23:53:34.366811       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 23:53:34.367219       1 server.go:661] "Version info" version="v1.24.3"
	I0725 23:53:34.367279       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 23:53:34.368929       1 config.go:317] "Starting service config controller"
	I0725 23:53:34.368980       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 23:53:34.369003       1 config.go:226] "Starting endpoint slice config controller"
	I0725 23:53:34.369008       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 23:53:34.370073       1 config.go:444] "Starting node config controller"
	I0725 23:53:34.370110       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 23:53:34.470686       1 shared_informer.go:262] Caches are synced for node config
	I0725 23:53:34.470745       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 23:53:34.470754       1 shared_informer.go:262] Caches are synced for service config
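
The three "Waiting for caches to sync" / "Caches are synced" pairs above are the standard shared-informer startup barrier: kube-proxy lists its inputs once before programming iptables rules. A minimal sketch of that barrier with client-go informers, assuming a default kubeconfig (all names are illustrative, not minikube's code):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Block until the initial list has been delivered, as kube-proxy does
	// before it starts acting on services and endpoint slices.
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		panic("timed out waiting for service cache to sync")
	}
	fmt.Println("caches are synced")
}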
	
	* 
	* ==> kube-scheduler [fb877ae4dac2] <==
	* W0725 23:53:17.824747       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 23:53:17.824778       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 23:53:17.824671       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 23:53:17.824914       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0725 23:53:17.825194       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 23:53:17.825225       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 23:53:17.825324       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 23:53:17.825360       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 23:53:17.825531       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 23:53:17.825584       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 23:53:17.825624       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 23:53:17.825794       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 23:53:17.828310       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 23:53:17.828347       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 23:53:17.831705       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 23:53:17.831793       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 23:53:18.711755       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 23:53:18.711796       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 23:53:18.788102       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 23:53:18.788142       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0725 23:53:18.823433       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 23:53:18.823473       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 23:53:18.827332       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 23:53:18.827369       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0725 23:53:19.179220       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
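
The forbidden list/watch errors above are the usual transient while the control plane bootstraps: the scheduler's reflectors start before the default RBAC roles are reconciled, and the errors stop once authorization catches up (the cache sync at 23:53:19). To probe such a permission directly, a SelfSubjectAccessReview can be issued; a sketch under an assumed kubeconfig, not part of the test suite:

package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask the apiserver whether the current identity may perform the same
	// verb/resource the scheduler's reflector was denied above.
	review := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Group:    "policy",
				Resource: "poddisruptionbudgets",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}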
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 23:48:41 UTC, end at Mon 2022-07-25 23:54:43 UTC. --
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538794    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e4b5868d-0220-4d63-8b47-1ed865b090cc-tmp-dir\") pod \"metrics-server-5c6f97fb75-p6xmp\" (UID: \"e4b5868d-0220-4d63-8b47-1ed865b090cc\") " pod="kube-system/metrics-server-5c6f97fb75-p6xmp"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538810    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c8dd765-07cc-442c-a097-c898019f7c02-config-volume\") pod \"coredns-6d4b75cb6d-pk97r\" (UID: \"5c8dd765-07cc-442c-a097-c898019f7c02\") " pod="kube-system/coredns-6d4b75cb6d-pk97r"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538830    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7npkz\" (UniqueName: \"kubernetes.io/projected/acdd6709-c55c-4389-9025-5a4541349682-kube-api-access-7npkz\") pod \"dashboard-metrics-scraper-dffd48c4c-7dmwv\" (UID: \"acdd6709-c55c-4389-9025-5a4541349682\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-7dmwv"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538847    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gqpr\" (UniqueName: \"kubernetes.io/projected/9d89a226-d4b6-4543-9b95-c04b32e36bb3-kube-api-access-4gqpr\") pod \"kube-proxy-r8xpz\" (UID: \"9d89a226-d4b6-4543-9b95-c04b32e36bb3\") " pod="kube-system/kube-proxy-r8xpz"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538909    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/acdd6709-c55c-4389-9025-5a4541349682-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-7dmwv\" (UID: \"acdd6709-c55c-4389-9025-5a4541349682\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-7dmwv"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538939    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d89a226-d4b6-4543-9b95-c04b32e36bb3-lib-modules\") pod \"kube-proxy-r8xpz\" (UID: \"9d89a226-d4b6-4543-9b95-c04b32e36bb3\") " pod="kube-system/kube-proxy-r8xpz"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.538967    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbs84\" (UniqueName: \"kubernetes.io/projected/96b01ade-dad8-4551-a42e-ec5920059ae9-kube-api-access-zbs84\") pod \"storage-provisioner\" (UID: \"96b01ade-dad8-4551-a42e-ec5920059ae9\") " pod="kube-system/storage-provisioner"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.539009    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f622\" (UniqueName: \"kubernetes.io/projected/e4b5868d-0220-4d63-8b47-1ed865b090cc-kube-api-access-6f622\") pod \"metrics-server-5c6f97fb75-p6xmp\" (UID: \"e4b5868d-0220-4d63-8b47-1ed865b090cc\") " pod="kube-system/metrics-server-5c6f97fb75-p6xmp"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.539029    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc7rw\" (UniqueName: \"kubernetes.io/projected/5c8dd765-07cc-442c-a097-c898019f7c02-kube-api-access-nc7rw\") pod \"coredns-6d4b75cb6d-pk97r\" (UID: \"5c8dd765-07cc-442c-a097-c898019f7c02\") " pod="kube-system/coredns-6d4b75cb6d-pk97r"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.539063    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/96b01ade-dad8-4551-a42e-ec5920059ae9-tmp\") pod \"storage-provisioner\" (UID: \"96b01ade-dad8-4551-a42e-ec5920059ae9\") " pod="kube-system/storage-provisioner"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.539166    9913 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fb1410cc-4f8d-414e-abf8-64f2efff1852-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-9c5cf\" (UID: \"fb1410cc-4f8d-414e-abf8-64f2efff1852\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-9c5cf"
	Jul 25 23:54:36 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:36.539251    9913 reconciler.go:157] "Reconciler: start to sync state"
	Jul 25 23:54:37 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:37.690683    9913 request.go:601] Waited for 1.088878238s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 25 23:54:37 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:37.719258    9913 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220725164719-14919\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220725164719-14919"
	Jul 25 23:54:37 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:37.950548    9913 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220725164719-14919\" already exists" pod="kube-system/kube-apiserver-no-preload-20220725164719-14919"
	Jul 25 23:54:38 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:38.094978    9913 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220725164719-14919\" already exists" pod="kube-system/etcd-no-preload-20220725164719-14919"
	Jul 25 23:54:38 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:38.356192    9913 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220725164719-14919\" already exists" pod="kube-system/kube-scheduler-no-preload-20220725164719-14919"
	Jul 25 23:54:39 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:39.281745    9913 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 25 23:54:39 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:39.281816    9913 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 25 23:54:39 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:39.282031    9913 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6f622,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-p6xmp_kube-system(e4b5868d-0220-4d63-8b47-1ed865b090cc): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 25 23:54:39 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:39.282069    9913 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-p6xmp" podUID=e4b5868d-0220-4d63-8b47-1ed865b090cc
	Jul 25 23:54:39 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:39.793459    9913 scope.go:110] "RemoveContainer" containerID="8fb3d6ea2c67df0206acc1e6d0beac72517f3ba4765f4954b0a044d792ebd6fa"
	Jul 25 23:54:40 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:40.628175    9913 scope.go:110] "RemoveContainer" containerID="8fb3d6ea2c67df0206acc1e6d0beac72517f3ba4765f4954b0a044d792ebd6fa"
	Jul 25 23:54:40 no-preload-20220725164719-14919 kubelet[9913]: I0725 23:54:40.628448    9913 scope.go:110] "RemoveContainer" containerID="b1458e1bcf52755950ace3d26eda5dcfd9635e4cfa9c44a1cce7182c89221c19"
	Jul 25 23:54:40 no-preload-20220725164719-14919 kubelet[9913]: E0725 23:54:40.628643    9913 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-7dmwv_kubernetes-dashboard(acdd6709-c55c-4389-9025-5a4541349682)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-7dmwv" podUID=acdd6709-c55c-4389-9025-5a4541349682
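
The metrics-server pull failures above appear intentional: the image reference literally points at the registry fake.domain, so the daemon's pull dies at DNS resolution and the pod stays in ErrImagePull while the scraper backs off. The failing step is reproducible with a plain lookup; an illustrative sketch:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Same DNS lookup the Docker daemon performs before any registry traffic;
	// it fails with "no such host", matching the kubelet log above.
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		fmt.Println("lookup failed as the kubelet log shows:", err)
		return
	}
	fmt.Println("unexpectedly resolved:", addrs)
}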
	
	* 
	* ==> kubernetes-dashboard [1187dc7cc8b1] <==
	* 2022/07/25 23:53:47 Using namespace: kubernetes-dashboard
	2022/07/25 23:53:47 Using in-cluster config to connect to apiserver
	2022/07/25 23:53:47 Using secret token for csrf signing
	2022/07/25 23:53:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/25 23:53:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/25 23:53:47 Successful initial request to the apiserver, version: v1.24.3
	2022/07/25 23:53:47 Generating JWE encryption key
	2022/07/25 23:53:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/25 23:53:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/25 23:53:47 Initializing JWE encryption key from synchronized object
	2022/07/25 23:53:47 Creating in-cluster Sidecar client
	2022/07/25 23:53:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 23:53:47 Serving insecurely on HTTP port: 9090
	2022/07/25 23:54:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 23:53:47 Starting overwatch
	
	* 
	* ==> storage-provisioner [93d6948c9c49] <==
	* I0725 23:53:36.423963       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 23:53:36.448276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 23:53:36.448305       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 23:53:36.462121       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 23:53:36.463029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7071249-ea7e-4a53-9ca0-ca8a680bc065", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220725164719-14919_0d088afd-fb98-473e-9519-720be122e2d4 became leader
	I0725 23:53:36.463192       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220725164719-14919_0d088afd-fb98-473e-9519-720be122e2d4!
	I0725 23:53:36.564081       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220725164719-14919_0d088afd-fb98-473e-9519-720be122e2d4!
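
The storage-provisioner lines show the client-go leader-election flow: acquire the kube-system/k8s.io-minikube-hostpath lock, emit a LeaderElection event, then start the controller. The provisioner here locks on an Endpoints object; a sketch of the same flow using the newer Lease-based lock instead (identity, durations, and lock name are illustrative assumptions):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	// Blocks, renewing the lease; callbacks fire on acquire/loss, matching
	// the "attempting to acquire" / "successfully acquired" lines above.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				fmt.Println("acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				fmt.Println("lost lease; shutting down")
			},
		},
	})
}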
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220725164719-14919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-p6xmp
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220725164719-14919 describe pod metrics-server-5c6f97fb75-p6xmp
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220725164719-14919 describe pod metrics-server-5c6f97fb75-p6xmp: exit status 1 (290.430139ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-p6xmp" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220725164719-14919 describe pod metrics-server-5c6f97fb75-p6xmp: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (43.93s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:00:18.967251   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:00:55.137327   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 17:00:57.232938   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:00:59.457538   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 17:01:01.126114   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:01:10.676938   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:01:43.993190   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:02:22.510017   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:03:17.279595   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:03:30.445632   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:03:41.246308   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:03:44.968671   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:04:45.834386   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:04:53.088709   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 17:04:53.511145   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:05:04.308937   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:05:18.996338   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:05:55.167361   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:05:57.262860   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 17:05:59.487492   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:06:08.914952   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 17:06:10.707602   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 17:06:16.160568   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:06:44.022358   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:06:56.000275   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:07:18.220071   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 17:07:20.318383   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:07:33.771978   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:08:17.307573   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:08:22.051936   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:08:30.475591   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 2 (475.635373ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-20220725164610-14919" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
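
The nine-minute wait that just timed out is a poll of the pod list by label; with the apiserver stopped, every poll returns EOF (the repeated WARNING lines above) until the deadline. A sketch of the same loop, assuming a default kubeconfig (the selector and timeout mirror the log; everything else is illustrative, not the harness's code):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollImmediate(3*time.Second, 9*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// Matches the WARNING lines: a dead apiserver yields EOF; keep polling.
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == "Running" {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for the condition:", err)
	}
}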
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725164610-14919
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725164610-14919:

-- stdout --
	[
	    {
	        "Id": "3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf",
	        "Created": "2022-07-25T23:46:16.38043483Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244113,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:51:54.648798687Z",
	            "FinishedAt": "2022-07-25T23:51:51.718201115Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hosts",
	        "LogPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf-json.log",
	        "Name": "/old-k8s-version-20220725164610-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725164610-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725164610-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c70
0ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/d
ocker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a70
0e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725164610-14919",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725164610-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725164610-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1e8c374f85bd4349655b5dfcfe823620a484a31bb6415a2e0b8632dd020452f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50823"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50824"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50825"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50826"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50822"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c1e8c374f85b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725164610-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3e45dea9c014",
	                        "old-k8s-version-20220725164610-14919"
	                    ],
	                    "NetworkID": "cc2155f0f89448c4255b6f474f0b34c64b5460d3acc5441984909bacee63d7d6",
	                    "EndpointID": "aa5034ea8648431be616c4e8025677bb27e250d86bdb70415b75ae2f6083245f",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
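The harness dumps the full docker inspect JSON above for the post-mortem. When only a field or two matters, docker inspect also accepts a Go template via -f, so the same State fields can be read directly. A small illustrative sketch (the container name is taken from the log; the containerState helper is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState reads selected State fields via docker's template support.
// {{.State.Status}} is the same field shown as "Status": "running" above.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "inspect", "-f",
		"{{.State.Status}} {{.State.StartedAt}}", name).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("old-k8s-version-20220725164610-14919")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(state) // e.g. "running 2022-07-25T23:51:54.648798687Z"
}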
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 2 (478.438175ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220725164610-14919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220725164610-14919 logs -n 25: (3.590205096s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p                                                         | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                            |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220725170207-14919      | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | disable-driver-mounts-20220725170207-14919                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725170926-14919 --memory=2200           | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT |                     |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 17:09:26
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:09:26.416401   32914 out.go:296] Setting OutFile to fd 1 ...
	I0725 17:09:26.416544   32914 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:09:26.416549   32914 out.go:309] Setting ErrFile to fd 2...
	I0725 17:09:26.416553   32914 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:09:26.416649   32914 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 17:09:26.417208   32914 out.go:303] Setting JSON to false
	I0725 17:09:26.432651   32914 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":11089,"bootTime":1658783077,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 17:09:26.432763   32914 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 17:09:26.454377   32914 out.go:177] * [newest-cni-20220725170926-14919] minikube v1.26.0 on Darwin 12.5
	I0725 17:09:26.475968   32914 notify.go:193] Checking for updates...
	I0725 17:09:26.497203   32914 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 17:09:26.519313   32914 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:09:26.541168   32914 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 17:09:26.562074   32914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:09:26.583955   32914 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 17:09:26.605466   32914 config.go:178] Loaded profile config "old-k8s-version-20220725164610-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 17:09:26.605504   32914 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 17:09:26.675972   32914 docker.go:137] docker version: linux-20.10.17
	I0725 17:09:26.676113   32914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:09:26.811232   32914 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:09:26.748789324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 17:09:26.854209   32914 out.go:177] * Using the docker driver based on user configuration
	I0725 17:09:26.875210   32914 start.go:284] selected driver: docker
	I0725 17:09:26.875236   32914 start.go:808] validating driver "docker" against <nil>
	I0725 17:09:26.875250   32914 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:09:26.877365   32914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:09:27.011130   32914 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:09:26.949978223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 17:09:27.011238   32914 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	W0725 17:09:27.011263   32914 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0725 17:09:27.011489   32914 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0725 17:09:27.033183   32914 out.go:177] * Using Docker Desktop driver with root privileges
	I0725 17:09:27.054472   32914 cni.go:95] Creating CNI manager for ""
	I0725 17:09:27.054491   32914 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:09:27.054501   32914 start_flags.go:310] config:
	{Name:newest-cni-20220725170926-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:09:27.075727   32914 out.go:177] * Starting control plane node newest-cni-20220725170926-14919 in cluster newest-cni-20220725170926-14919
	I0725 17:09:27.117762   32914 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 17:09:27.138943   32914 out.go:177] * Pulling base image ...
	I0725 17:09:27.180779   32914 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:09:27.180784   32914 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 17:09:27.180848   32914 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0725 17:09:27.180866   32914 cache.go:57] Caching tarball of preloaded images
	I0725 17:09:27.181056   32914 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 17:09:27.181087   32914 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
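	The preload lines above are a stat-then-skip cache check: if the tarball is already present under .minikube/cache/preloaded-tarball, the download is skipped. (The preload-exists failures at the top of this report are the converse case, where that stat fails.) A hedged sketch of the check, with the path layout copied from the log and the function itself hypothetical:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath mirrors the cache layout seen in the log:
	// <minikube home>/cache/preloaded-tarball/preloaded-images-k8s-v18-<k8s>-<runtime>-overlay2-amd64.tar.lz4
	func preloadPath(home, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(home, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.24.3", "docker")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		} else if os.IsNotExist(err) {
			fmt.Println("preload missing, would download:", p)
		} else {
			fmt.Println("stat error:", err)
		}
	}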
	I0725 17:09:27.182110   32914 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/config.json ...
	I0725 17:09:27.182235   32914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/config.json: {Name:mk26c2f6c95ebc648ae8523d6f6bda7d9337fbea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:09:27.245816   32914 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 17:09:27.245837   32914 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 17:09:27.245850   32914 cache.go:208] Successfully downloaded all kic artifacts
	I0725 17:09:27.245898   32914 start.go:370] acquiring machines lock for newest-cni-20220725170926-14919: {Name:mk0f9a30538ef211b73bc7dbc2b91673075b0931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:09:27.246044   32914 start.go:374] acquired machines lock for "newest-cni-20220725170926-14919" in 134.524µs
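	The machines lock above is acquired with a 500ms retry delay and a 10m timeout. The sketch below shows only the general shape of acquire-with-timeout using an exclusive lock file; it is not minikube's actual lock implementation, just an illustration of the Delay/Timeout parameters logged:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire creates path exclusively, retrying every delay until timeout.
	// Generic sketch only; minikube uses its own lock package.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s after %v", path, timeout)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; provisioning can proceed")
	}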
	I0725 17:09:27.246075   32914 start.go:92] Provisioning new machine with config: &{Name:newest-cni-20220725170926-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespac
e:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 17:09:27.246135   32914 start.go:132] createHost starting for "" (driver="docker")
	I0725 17:09:27.268888   32914 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0725 17:09:27.269264   32914 start.go:166] libmachine.API.Create for "newest-cni-20220725170926-14919" (driver="docker")
	I0725 17:09:27.269316   32914 client.go:168] LocalClient.Create starting
	I0725 17:09:27.269508   32914 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem
	I0725 17:09:27.269582   32914 main.go:134] libmachine: Decoding PEM data...
	I0725 17:09:27.269615   32914 main.go:134] libmachine: Parsing certificate...
	I0725 17:09:27.269710   32914 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem
	I0725 17:09:27.269777   32914 main.go:134] libmachine: Decoding PEM data...
	I0725 17:09:27.269796   32914 main.go:134] libmachine: Parsing certificate...
	I0725 17:09:27.290856   32914 cli_runner.go:164] Run: docker network inspect newest-cni-20220725170926-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 17:09:27.358275   32914 cli_runner.go:211] docker network inspect newest-cni-20220725170926-14919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 17:09:27.358393   32914 network_create.go:272] running [docker network inspect newest-cni-20220725170926-14919] to gather additional debugging logs...
	I0725 17:09:27.358414   32914 cli_runner.go:164] Run: docker network inspect newest-cni-20220725170926-14919
	W0725 17:09:27.424751   32914 cli_runner.go:211] docker network inspect newest-cni-20220725170926-14919 returned with exit code 1
	I0725 17:09:27.424775   32914 network_create.go:275] error running [docker network inspect newest-cni-20220725170926-14919]: docker network inspect newest-cni-20220725170926-14919: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220725170926-14919
	I0725 17:09:27.424821   32914 network_create.go:277] output of [docker network inspect newest-cni-20220725170926-14919]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220725170926-14919
	
	** /stderr **
	I0725 17:09:27.424907   32914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 17:09:27.491881   32914 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000836050] misses:0}
	I0725 17:09:27.491930   32914 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 17:09:27.491947   32914 network_create.go:115] attempt to create docker network newest-cni-20220725170926-14919 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0725 17:09:27.492057   32914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220725170926-14919 newest-cni-20220725170926-14919
	W0725 17:09:27.556827   32914 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220725170926-14919 newest-cni-20220725170926-14919 returned with exit code 1
	W0725 17:09:27.556890   32914 network_create.go:107] failed to create docker network newest-cni-20220725170926-14919 192.168.49.0/24, will retry: subnet is taken
	I0725 17:09:27.557146   32914 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000836050] amended:false}} dirty:map[] misses:0}
	I0725 17:09:27.557163   32914 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 17:09:27.557435   32914 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000836050] amended:true}} dirty:map[192.168.49.0:0xc000836050 192.168.58.0:0xc0003b6038] misses:0}
	I0725 17:09:27.557452   32914 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 17:09:27.557458   32914 network_create.go:115] attempt to create docker network newest-cni-20220725170926-14919 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0725 17:09:27.557525   32914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220725170926-14919 newest-cni-20220725170926-14919
	W0725 17:09:27.623067   32914 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220725170926-14919 newest-cni-20220725170926-14919 returned with exit code 1
	W0725 17:09:27.623116   32914 network_create.go:107] failed to create docker network newest-cni-20220725170926-14919 192.168.58.0/24, will retry: subnet is taken
	I0725 17:09:27.623391   32914 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000836050] amended:true}} dirty:map[192.168.49.0:0xc000836050 192.168.58.0:0xc0003b6038] misses:1}
	I0725 17:09:27.623424   32914 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 17:09:27.623629   32914 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000836050] amended:true}} dirty:map[192.168.49.0:0xc000836050 192.168.58.0:0xc0003b6038 192.168.67.0:0xc000c329f0] misses:1}
	I0725 17:09:27.623644   32914 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 17:09:27.623653   32914 network_create.go:115] attempt to create docker network newest-cni-20220725170926-14919 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0725 17:09:27.623712   32914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220725170926-14919 newest-cni-20220725170926-14919
	W0725 17:09:27.687379   32914 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220725170926-14919 newest-cni-20220725170926-14919 returned with exit code 1
	W0725 17:09:27.687417   32914 network_create.go:107] failed to create docker network newest-cni-20220725170926-14919 192.168.67.0/24, will retry: subnet is taken
	I0725 17:09:27.687687   32914 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000836050] amended:true}} dirty:map[192.168.49.0:0xc000836050 192.168.58.0:0xc0003b6038 192.168.67.0:0xc000c329f0] misses:2}
	I0725 17:09:27.687705   32914 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 17:09:27.687907   32914 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000836050] amended:true}} dirty:map[192.168.49.0:0xc000836050 192.168.58.0:0xc0003b6038 192.168.67.0:0xc000c329f0 192.168.76.0:0xc0003b6208] misses:2}
	I0725 17:09:27.687929   32914 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 17:09:27.687936   32914 network_create.go:115] attempt to create docker network newest-cni-20220725170926-14919 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0725 17:09:27.687993   32914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220725170926-14919 newest-cni-20220725170926-14919
	I0725 17:09:27.783112   32914 network_create.go:99] docker network newest-cni-20220725170926-14919 192.168.76.0/24 created
	I0725 17:09:27.783152   32914 kic.go:106] calculated static IP "192.168.76.2" for the "newest-cni-20220725170926-14919" container
	I0725 17:09:27.783271   32914 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 17:09:27.853084   32914 cli_runner.go:164] Run: docker volume create newest-cni-20220725170926-14919 --label name.minikube.sigs.k8s.io=newest-cni-20220725170926-14919 --label created_by.minikube.sigs.k8s.io=true
	I0725 17:09:27.918702   32914 oci.go:103] Successfully created a docker volume newest-cni-20220725170926-14919
	I0725 17:09:27.918835   32914 cli_runner.go:164] Run: docker run --rm --name newest-cni-20220725170926-14919-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220725170926-14919 --entrypoint /usr/bin/test -v newest-cni-20220725170926-14919:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0725 17:09:28.402343   32914 oci.go:107] Successfully prepared a docker volume newest-cni-20220725170926-14919
	I0725 17:09:28.402398   32914 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:09:28.402432   32914 kic.go:179] Starting extracting preloaded images to volume ...
	I0725 17:09:28.402531   32914 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220725170926-14919:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0725 17:09:33.172378   32914 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220725170926-14919:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (4.769753908s)
	I0725 17:09:33.172416   32914 kic.go:188] duration metric: took 4.769935 seconds to extract preloaded images to volume
	I0725 17:09:33.172527   32914 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 17:09:33.309733   32914 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220725170926-14919 --name newest-cni-20220725170926-14919 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220725170926-14919 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220725170926-14919 --network newest-cni-20220725170926-14919 --ip 192.168.76.2 --volume newest-cni-20220725170926-14919:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0725 17:09:33.687789   32914 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Running}}
	I0725 17:09:33.763321   32914 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:09:33.844465   32914 cli_runner.go:164] Run: docker exec newest-cni-20220725170926-14919 stat /var/lib/dpkg/alternatives/iptables
	I0725 17:09:33.980222   32914 oci.go:144] the created container "newest-cni-20220725170926-14919" has a running status.
	I0725 17:09:33.980250   32914 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa...
	I0725 17:09:34.106159   32914 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0725 17:09:34.223614   32914 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:09:34.295785   32914 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 17:09:34.295804   32914 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220725170926-14919 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0725 17:09:34.420430   32914 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:09:34.493100   32914 machine.go:88] provisioning docker machine ...
	I0725 17:09:34.493155   32914 ubuntu.go:169] provisioning hostname "newest-cni-20220725170926-14919"
	I0725 17:09:34.493238   32914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:09:34.564810   32914 main.go:134] libmachine: Using SSH client type: native
	I0725 17:09:34.565016   32914 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52743 <nil> <nil>}
	I0725 17:09:34.565034   32914 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220725170926-14919 && echo "newest-cni-20220725170926-14919" | sudo tee /etc/hostname
	I0725 17:09:34.695808   32914 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220725170926-14919
	
	I0725 17:09:34.695897   32914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:09:34.769887   32914 main.go:134] libmachine: Using SSH client type: native
	I0725 17:09:34.770052   32914 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52743 <nil> <nil>}
	I0725 17:09:34.770069   32914 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220725170926-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220725170926-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220725170926-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:09:34.892780   32914 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:09:34.892800   32914 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 17:09:34.892825   32914 ubuntu.go:177] setting up certificates
	I0725 17:09:34.892830   32914 provision.go:83] configureAuth start
	I0725 17:09:34.892893   32914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725170926-14919
	I0725 17:09:34.966349   32914 provision.go:138] copyHostCerts
	I0725 17:09:34.966440   32914 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 17:09:34.966450   32914 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 17:09:34.966542   32914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 17:09:34.966740   32914 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 17:09:34.966749   32914 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 17:09:34.966815   32914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 17:09:34.966960   32914 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 17:09:34.966966   32914 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 17:09:34.967026   32914 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 17:09:34.967159   32914 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220725170926-14919 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220725170926-14919]
	I0725 17:09:35.096425   32914 provision.go:172] copyRemoteCerts
	I0725 17:09:35.096476   32914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:09:35.096523   32914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:09:35.170605   32914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52743 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:09:35.258662   32914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 17:09:35.276454   32914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0725 17:09:35.292735   32914 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 17:09:35.310258   32914 provision.go:86] duration metric: configureAuth took 417.394107ms
	I0725 17:09:35.310270   32914 ubuntu.go:193] setting minikube options for container-runtime
	I0725 17:09:35.310508   32914 config.go:178] Loaded profile config "newest-cni-20220725170926-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:09:35.310606   32914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:09:35.384424   32914 main.go:134] libmachine: Using SSH client type: native
	I0725 17:09:35.384574   32914 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52743 <nil> <nil>}
	I0725 17:09:35.384602   32914 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 17:09:35.509544   32914 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 17:09:35.509562   32914 ubuntu.go:71] root file system type: overlay
	I0725 17:09:35.509728   32914 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 17:09:35.509816   32914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:09:35.590292   32914 main.go:134] libmachine: Using SSH client type: native
	I0725 17:09:35.590458   32914 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52743 <nil> <nil>}
	I0725 17:09:35.590515   32914 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 17:09:35.727080   32914 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 17:09:35.727182   32914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:09:35.805840   32914 main.go:134] libmachine: Using SSH client type: native
	I0725 17:09:35.805997   32914 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52743 <nil> <nil>}
	I0725 17:09:35.806011   32914 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 23:51:54 UTC, end at Tue 2022-07-26 00:09:37 UTC. --
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[130]: time="2022-07-25T23:51:57.280625561Z" level=info msg="Processing signal 'terminated'"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[130]: time="2022-07-25T23:51:57.281621938Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[130]: time="2022-07-25T23:51:57.282179113Z" level=info msg="Daemon shutdown complete"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: docker.service: Succeeded.
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.333388918Z" level=info msg="Starting up"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335280455Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335321821Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335353731Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335365331Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336739849Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336771694Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336792129Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336802010Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.340124810Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.344053927Z" level=info msg="Loading containers: start."
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.416564242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.446250062Z" level=info msg="Loading containers: done."
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.454564731Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.454620735Z" level=info msg="Daemon has completed initialization"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Started Docker Application Container Engine.
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.478491259Z" level=info msg="API listen on [::]:2376"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.481408702Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-07-26T00:09:39Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  00:09:39 up  1:16,  0 users,  load average: 1.41, 0.96, 1.00
	Linux old-k8s-version-20220725164610-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 23:51:54 UTC, end at Tue 2022-07-26 00:09:39 UTC. --
	Jul 26 00:09:37 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 kubelet[24560]: I0726 00:09:38.614149   24560 server.go:410] Version: v1.16.0
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 kubelet[24560]: I0726 00:09:38.614431   24560 plugins.go:100] No cloud provider specified.
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 kubelet[24560]: I0726 00:09:38.614445   24560 server.go:773] Client rotation is on, will bootstrap in background
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 kubelet[24560]: I0726 00:09:38.617808   24560 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 kubelet[24560]: W0726 00:09:38.619038   24560 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 kubelet[24560]: W0726 00:09:38.619183   24560 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 kubelet[24560]: F0726 00:09:38.619209   24560 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 26 00:09:38 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 930.
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 kubelet[24575]: I0726 00:09:39.367805   24575 server.go:410] Version: v1.16.0
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 kubelet[24575]: I0726 00:09:39.368173   24575 plugins.go:100] No cloud provider specified.
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 kubelet[24575]: I0726 00:09:39.368234   24575 server.go:773] Client rotation is on, will bootstrap in background
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 kubelet[24575]: I0726 00:09:39.370062   24575 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 kubelet[24575]: W0726 00:09:39.370832   24575 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 kubelet[24575]: W0726 00:09:39.370929   24575 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 kubelet[24575]: F0726 00:09:39.371008   24575 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 26 00:09:39 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0725 17:09:39.719377   33032 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 2 (492.442785ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220725164610-14919" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.21s)

x
+
TestStartStop/group/embed-certs/serial/Pause (43.7s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220725165448-14919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919: exit status 2 (16.127334015s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919: exit status 2 (16.189266545s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220725165448-14919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220725165448-14919
helpers_test.go:235: (dbg) docker inspect embed-certs-20220725165448-14919:

-- stdout --
	[
	    {
	        "Id": "9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32",
	        "Created": "2022-07-25T23:54:55.830914982Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 264743,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:56:04.753937421Z",
	            "FinishedAt": "2022-07-25T23:56:02.715447744Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32/hosts",
	        "LogPath": "/var/lib/docker/containers/9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32/9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32-json.log",
	        "Name": "/embed-certs-20220725165448-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220725165448-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220725165448-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b47ddad077f42705cc10c763d70c555f888ae17e29bbf8a52530a710f53399d4-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c700ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/docker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a700e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b47ddad077f42705cc10c763d70c555f888ae17e29bbf8a52530a710f53399d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b47ddad077f42705cc10c763d70c555f888ae17e29bbf8a52530a710f53399d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b47ddad077f42705cc10c763d70c555f888ae17e29bbf8a52530a710f53399d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220725165448-14919",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220725165448-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220725165448-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220725165448-14919",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220725165448-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34897cc780983bc169e42596f514bea27e30f21721039af68db144c6f6f3aa9b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51310"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51311"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51312"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51313"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51314"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/34897cc78098",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220725165448-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9b6e28a028ba",
	                        "embed-certs-20220725165448-14919"
	                    ],
	                    "NetworkID": "ff1a660fe92dd6c2e75d32c3e09ef643890082fb32ed982be41e16b8bb608895",
	                    "EndpointID": "5f5f62b7a742d86b0fff95ab4ead516395300b7daa36b24d2407ea46a25970da",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220725165448-14919 logs -n 25
E0725 17:01:55.969581   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220725165448-14919 logs -n 25: (2.822041925s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                | enable-default-cni-20220725163045-14919 | jenkins | v1.26.0 | 25 Jul 22 16:46 PDT | 25 Jul 22 16:46 PDT |
	|         | enable-default-cni-20220725163045-14919           |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:46 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220725163045-14919            | jenkins | v1.26.0 | 25 Jul 22 16:47 PDT | 25 Jul 22 16:47 PDT |
	|         | kubenet-20220725163045-14919                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:47 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:53 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:50 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT | 25 Jul 22 16:51 PDT |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT | 25 Jul 22 16:51 PDT |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
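	The audit table above records recent minikube invocations; rows with an empty End Time (the metrics-server enable and the second start against old-k8s-version-20220725164610-14919) were still running, or never completed, when this log was captured. The stalled start can be replayed verbatim from the table (a sketch; every flag below is copied from the rows above):
	
	  out/minikube-darwin-amd64 start -p old-k8s-version-20220725164610-14919 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false \
	    --driver=docker --kubernetes-version=v1.16.0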
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 16:56:03
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 16:56:03.433534   31337 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:56:03.433731   31337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:56:03.433737   31337 out.go:309] Setting ErrFile to fd 2...
	I0725 16:56:03.433741   31337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:56:03.433881   31337 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:56:03.434424   31337 out.go:303] Setting JSON to false
	I0725 16:56:03.449478   31337 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10286,"bootTime":1658783077,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:56:03.449569   31337 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:56:03.471556   31337 out.go:177] * [embed-certs-20220725165448-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:56:03.515487   31337 notify.go:193] Checking for updates...
	I0725 16:56:03.537285   31337 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:56:03.559095   31337 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:56:03.580425   31337 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:56:03.602303   31337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:56:03.625261   31337 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:56:03.646919   31337 config.go:178] Loaded profile config "embed-certs-20220725165448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:56:03.647548   31337 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:56:03.716719   31337 docker.go:137] docker version: linux-20.10.17
	I0725 16:56:03.716857   31337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:56:03.850783   31337 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:56:03.793505502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:56:03.871916   31337 out.go:177] * Using the docker driver based on existing profile
	I0725 16:56:03.893953   31337 start.go:284] selected driver: docker
	I0725 16:56:03.893988   31337 start.go:808] validating driver "docker" against &{Name:embed-certs-20220725165448-14919 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:56:03.894188   31337 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:56:03.897532   31337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:56:04.045703   31337 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:56:03.982785914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:56:04.045859   31337 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 16:56:04.045875   31337 cni.go:95] Creating CNI manager for ""
	I0725 16:56:04.045886   31337 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:56:04.045899   31337 start_flags.go:310] config:
	{Name:embed-certs-20220725165448-14919 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:56:04.088356   31337 out.go:177] * Starting control plane node embed-certs-20220725165448-14919 in cluster embed-certs-20220725165448-14919
	I0725 16:56:04.109451   31337 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:56:04.130134   31337 out.go:177] * Pulling base image ...
	I0725 16:56:04.172375   31337 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:56:04.172376   31337 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:56:04.172427   31337 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0725 16:56:04.172439   31337 cache.go:57] Caching tarball of preloaded images
	I0725 16:56:04.172566   31337 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 16:56:04.172579   31337 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0725 16:56:04.173197   31337 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/config.json ...
	I0725 16:56:04.236416   31337 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:56:04.236434   31337 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:56:04.236446   31337 cache.go:208] Successfully downloaded all kic artifacts
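	Both kic artifacts are satisfied from the local cache at this point: the v1.24.3 preload tarball is found on disk and the kicbase image already exists in the docker daemon, so nothing is downloaded. One quick way to see which preload tarballs are cached locally (a sketch; MINIKUBE_HOME is the integration workspace path shown in the log lines above):
	
	  ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/"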
	I0725 16:56:04.236526   31337 start.go:370] acquiring machines lock for embed-certs-20220725165448-14919: {Name:mkbc95d1eab1ca3410e49bf2a4e793a24fb963ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:56:04.236618   31337 start.go:374] acquired machines lock for "embed-certs-20220725165448-14919" in 73.505µs
	I0725 16:56:04.236655   31337 start.go:95] Skipping create...Using existing machine configuration
	I0725 16:56:04.236666   31337 fix.go:55] fixHost starting: 
	I0725 16:56:04.236886   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 16:56:04.304136   31337 fix.go:103] recreateIfNeeded on embed-certs-20220725165448-14919: state=Stopped err=<nil>
	W0725 16:56:04.304166   31337 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 16:56:04.346631   31337 out.go:177] * Restarting existing docker container for "embed-certs-20220725165448-14919" ...
	I0725 16:56:03.930815   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:03.940063   30645 kubeadm.go:630] restartCluster took 4m5.611815756s
	W0725 16:56:03.940157   30645 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0725 16:56:03.940174   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:56:04.371868   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:56:04.382270   30645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:56:04.391315   30645 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:56:04.391409   30645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:56:04.400006   30645 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
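	The four missing files are expected here: this process (pid 30645) ran `kubeadm reset --force` moments earlier (16:56:03.940174), which removes the kubeconfig files under /etc/kubernetes, so the stale-config check exits with status 2 and minikube falls through to a fresh `kubeadm init`. The directory can be inspected by hand over minikube's ssh subcommand (a sketch; substitute the profile in question):
	
	  out/minikube-darwin-amd64 ssh -p old-k8s-version-20220725164610-14919 -- sudo ls -la /etc/kubernetes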
	I0725 16:56:04.400035   30645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:56:05.304425   30645 out.go:204]   - Generating certificates and keys ...
	I0725 16:56:04.367742   31337 cli_runner.go:164] Run: docker start embed-certs-20220725165448-14919
	I0725 16:56:04.744066   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 16:56:04.827385   31337 kic.go:415] container "embed-certs-20220725165448-14919" state is running.
	I0725 16:56:04.828035   31337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725165448-14919
	I0725 16:56:04.912426   31337 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/config.json ...
	I0725 16:56:04.912942   31337 machine.go:88] provisioning docker machine ...
	I0725 16:56:04.912971   31337 ubuntu.go:169] provisioning hostname "embed-certs-20220725165448-14919"
	I0725 16:56:04.913056   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:04.999598   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:04.999819   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:04.999838   31337 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220725165448-14919 && echo "embed-certs-20220725165448-14919" | sudo tee /etc/hostname
	I0725 16:56:05.137366   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220725165448-14919
	
	I0725 16:56:05.137451   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:05.224934   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:05.225280   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:05.225297   31337 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220725165448-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220725165448-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220725165448-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 16:56:05.351826   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:56:05.351845   31337 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:56:05.351871   31337 ubuntu.go:177] setting up certificates
	I0725 16:56:05.351880   31337 provision.go:83] configureAuth start
	I0725 16:56:05.351957   31337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725165448-14919
	I0725 16:56:05.433243   31337 provision.go:138] copyHostCerts
	I0725 16:56:05.433345   31337 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:56:05.433355   31337 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:56:05.433478   31337 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:56:05.433791   31337 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:56:05.433801   31337 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:56:05.433872   31337 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:56:05.434037   31337 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:56:05.434043   31337 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:56:05.434112   31337 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:56:05.434245   31337 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220725165448-14919 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220725165448-14919]
	I0725 16:56:05.543085   31337 provision.go:172] copyRemoteCerts
	I0725 16:56:05.543159   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:56:05.543212   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:05.626756   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:05.718355   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:56:05.738285   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0725 16:56:05.769330   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 16:56:05.792698   31337 provision.go:86] duration metric: configureAuth took 440.796611ms
	I0725 16:56:05.792721   31337 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:56:05.792935   31337 config.go:178] Loaded profile config "embed-certs-20220725165448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:56:05.793007   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:05.872213   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:05.872420   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:05.872432   31337 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:56:05.994661   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:56:05.994679   31337 ubuntu.go:71] root file system type: overlay
	I0725 16:56:05.994840   31337 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:56:05.994916   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.071541   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:06.071747   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:06.071803   31337 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:56:06.201902   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:56:06.201994   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.274921   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:06.275076   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:06.275096   31337 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 16:56:06.403965   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:56:06.403988   31337 machine.go:91] provisioned docker machine in 1.491027379s
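	Provisioning rewrote /lib/systemd/system/docker.service from the template above, swapping the new unit in and restarting docker only if the rendered file differed from the installed one (the diff-or-mv one-liner a few lines up). The unit that ended up active can be read back from the node (a sketch; `systemctl cat docker.service` is the same command this log runs during runtime detection further below):
	
	  out/minikube-darwin-amd64 ssh -p embed-certs-20220725165448-14919 -- sudo systemctl cat docker.service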
	I0725 16:56:06.404000   31337 start.go:307] post-start starting for "embed-certs-20220725165448-14919" (driver="docker")
	I0725 16:56:06.404006   31337 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:56:06.404073   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:56:06.404133   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.476046   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.566386   31337 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:56:06.569878   31337 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:56:06.569892   31337 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:56:06.569898   31337 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:56:06.569903   31337 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 16:56:06.569913   31337 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:56:06.570034   31337 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:56:06.570192   31337 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:56:06.570362   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:56:06.577828   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:56:06.594791   31337 start.go:310] post-start completed in 190.779597ms
	I0725 16:56:06.594866   31337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:56:06.594916   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.669069   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.756422   31337 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:56:06.761025   31337 fix.go:57] fixHost completed within 2.524342859s
	I0725 16:56:06.761037   31337 start.go:82] releasing machines lock for "embed-certs-20220725165448-14919", held for 2.524394197s
	I0725 16:56:06.761113   31337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725165448-14919
	I0725 16:56:06.833722   31337 ssh_runner.go:195] Run: systemctl --version
	I0725 16:56:06.833735   31337 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:56:06.833788   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.833798   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.913090   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.916204   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.999674   31337 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:56:07.221803   31337 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:56:07.221878   31337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:56:07.233712   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:56:07.246547   31337 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 16:56:07.308561   31337 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:56:07.377049   31337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:56:07.439815   31337 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:56:07.676316   31337 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 16:56:07.755611   31337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:56:07.831651   31337 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 16:56:07.841040   31337 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 16:56:07.841101   31337 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 16:56:07.846451   31337 start.go:471] Will wait 60s for crictl version
	I0725 16:56:07.846501   31337 ssh_runner.go:195] Run: sudo crictl version
	I0725 16:56:07.944939   31337 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
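	The crictl version handshake above succeeds because /etc/crictl.yaml, written a few lines earlier, points both the runtime and image endpoints at cri-dockerd. The same socket can also be targeted explicitly, without the config file (a sketch mirroring the `crictl images -o json` call recorded in the audit table):
	
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock images -o json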
	I0725 16:56:07.945009   31337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:56:07.979201   31337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:56:05.808767   30645 out.go:204]   - Booting up control plane ...
	I0725 16:56:08.057107   31337 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0725 16:56:08.057277   31337 cli_runner.go:164] Run: docker exec -t embed-certs-20220725165448-14919 dig +short host.docker.internal
	I0725 16:56:08.186719   31337 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:56:08.186830   31337 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:56:08.191311   31337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:56:08.201156   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:08.275039   31337 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:56:08.275116   31337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:56:08.304877   31337 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 16:56:08.304899   31337 docker.go:542] Images already preloaded, skipping extraction
	I0725 16:56:08.304983   31337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:56:08.336195   31337 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 16:56:08.336253   31337 cache_images.go:84] Images are preloaded, skipping loading
	I0725 16:56:08.336397   31337 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 16:56:08.409222   31337 cni.go:95] Creating CNI manager for ""
	I0725 16:56:08.409235   31337 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:56:08.409251   31337 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:56:08.409279   31337 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220725165448-14919 NodeName:embed-certs-20220725165448-14919 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:56:08.409450   31337 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220725165448-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
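The YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged just before them. A minimal sketch of that render step using Go's text/template, under the assumption that a template-plus-options-struct mechanism is what produces this output; Options and its fields are illustrative stand-ins, and the template is a fragment of the real config:

    // Sketch: render a fragment of the kubeadm config from an options struct,
    // the way the "kubeadm options" log line turns into the YAML above it.
    // Options and its fields are illustrative, not minikube's types.
    package main

    import (
    	"os"
    	"text/template"
    )

    type Options struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	NodeName         string
    	PodSubnet        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	opts := Options{"192.168.76.2", 8443, "embed-certs-20220725165448-14919", "10.244.0.0/16"}
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	if err := t.Execute(os.Stdout, opts); err != nil {
    		panic(err)
    	}
    }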
	
	I0725 16:56:08.409534   31337 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220725165448-14919 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 16:56:08.409594   31337 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0725 16:56:08.417474   31337 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 16:56:08.417537   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:56:08.424560   31337 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0725 16:56:08.437566   31337 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 16:56:08.468744   31337 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0725 16:56:08.481183   31337 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:56:08.484973   31337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:56:08.494671   31337 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919 for IP: 192.168.76.2
	I0725 16:56:08.494789   31337 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:56:08.494855   31337 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:56:08.495018   31337 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/client.key
	I0725 16:56:08.495092   31337 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/apiserver.key.31bdca25
	I0725 16:56:08.495177   31337 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/proxy-client.key
	I0725 16:56:08.495477   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:56:08.495545   31337 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:56:08.495559   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:56:08.495593   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:56:08.495624   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:56:08.495653   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:56:08.495726   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:56:08.496246   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:56:08.513745   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 16:56:08.531066   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:56:08.548205   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 16:56:08.566013   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:56:08.582490   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:56:08.599475   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:56:08.616680   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:56:08.633438   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:56:08.650322   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:56:08.667527   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:56:08.684813   31337 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 16:56:08.697928   31337 ssh_runner.go:195] Run: openssl version
	I0725 16:56:08.703211   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:56:08.710894   31337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:56:08.714829   31337 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:56:08.714882   31337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:56:08.719947   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 16:56:08.728099   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:56:08.736150   31337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:56:08.740028   31337 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:56:08.740070   31337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:56:08.745643   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 16:56:08.752922   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:56:08.760821   31337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:56:08.765131   31337 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:56:08.765176   31337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:56:08.770300   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
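The `openssl x509 -hash -noout` / `ln -fs` pairs above implement the standard /etc/ssl/certs layout: OpenSSL resolves trust anchors via symlinks named <subject-hash>.0, so each installed PEM gets a hash-named link (3ec20f2e.0, b5213941.0, 51391683.0 in this run). A sketch of one such install step, shelling out to openssl exactly as the runner does; installCert is an illustrative name:

    // Sketch of the subject-hash symlink step: ask openssl for the cert's
    // subject hash, then link /etc/ssl/certs/<hash>.0 at the PEM.
    // installCert is an illustrative name.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func installCert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	// Mirror `ln -fs`: drop any stale link, then symlink fresh.
    	_ = os.Remove(link)
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }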
	I0725 16:56:08.777357   31337 kubeadm.go:395] StartCluster: {Name:embed-certs-20220725165448-14919 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:56:08.777464   31337 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:56:08.807200   31337 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:56:08.814843   31337 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 16:56:08.814862   31337 kubeadm.go:626] restartCluster start
	I0725 16:56:08.814913   31337 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 16:56:08.821469   31337 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:08.821534   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:08.897952   31337 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220725165448-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:56:08.898156   31337 kubeconfig.go:127] "embed-certs-20220725165448-14919" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig - will repair!
	I0725 16:56:08.898466   31337 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:56:08.899825   31337 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 16:56:08.907910   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:08.907973   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:08.916840   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.118655   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.118753   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.129281   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.319023   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.319249   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.330056   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.517396   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.517539   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.528246   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.719033   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.719162   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.729548   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.919025   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.919173   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.929719   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.119141   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.119244   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.129805   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.318229   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.318452   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.328587   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.519054   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.519263   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.530051   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.719032   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.719238   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.729880   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.919240   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.919342   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.929774   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.117018   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.117113   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.126575   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.317191   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.317355   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.328052   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.519054   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.519269   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.529681   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.718964   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.719135   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.729819   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.917205   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.917274   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.925970   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.925980   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.926026   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.934283   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.934294   31337 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
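The run of "Checking apiserver status" entries above is a fixed-interval poll: roughly every 200ms (per the timestamps) the runner pgreps for a kube-apiserver process, and once the deadline passes it concludes the cluster "needs reconfigure". A minimal sketch of that poll; the interval and deadline are read off the timestamps, and waitForAPIServerPID is an illustrative name:

    // Sketch of the apiserver-PID poll seen above: pgrep on a schedule until
    // a deadline, then give up and trigger reconfiguration.
    // waitForAPIServerPID is an illustrative name.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForAPIServerPID(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		time.Sleep(200 * time.Millisecond)
    	}
    	return errors.New("timed out waiting for the condition")
    }

    func main() {
    	fmt.Println(waitForAPIServerPID(3 * time.Second))
    }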
	I0725 16:56:11.934304   31337 kubeadm.go:1092] stopping kube-system containers ...
	I0725 16:56:11.934365   31337 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:56:11.964872   31337 docker.go:443] Stopping containers: [9a167f413b73 c2c372481520 fa18253e55a4 b4b22c2bf1f2 bd98a2b23e46 aae50f7a8dff 751586c3bb9b 8e494f6ee1bf 7d251a39f801 c3027cf7039f ed3d81f7d6d9 225d3bf16e2b 98c148ba1de9 fead1519fc44 f1baffe473a6 4f47378a827e]
	I0725 16:56:11.964950   31337 ssh_runner.go:195] Run: docker stop 9a167f413b73 c2c372481520 fa18253e55a4 b4b22c2bf1f2 bd98a2b23e46 aae50f7a8dff 751586c3bb9b 8e494f6ee1bf 7d251a39f801 c3027cf7039f ed3d81f7d6d9 225d3bf16e2b 98c148ba1de9 fead1519fc44 f1baffe473a6 4f47378a827e
	I0725 16:56:11.994922   31337 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 16:56:12.005330   31337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:56:12.013063   31337 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 25 23:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 25 23:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jul 25 23:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 25 23:55 /etc/kubernetes/scheduler.conf
	
	I0725 16:56:12.013113   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 16:56:12.020769   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 16:56:12.028247   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 16:56:12.035399   31337 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:12.035447   31337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 16:56:12.042273   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 16:56:12.049752   31337 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:12.049803   31337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 16:56:12.056784   31337 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:56:12.064194   31337 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 16:56:12.064205   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:12.110551   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:12.991729   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:13.176129   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:13.230499   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
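The five Run lines above are the restart path's phased invocation: rather than a full `kubeadm init`, each phase (certs, kubeconfig, kubelet-start, control-plane, etcd) is run individually against the staged /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence; the log actually runs `sudo env PATH=... kubeadm` over SSH, so the local exec and env handling here are simplifications:

    // Sketch of the phased restart: run each kubeadm init phase in order,
    // with the versioned binaries directory on PATH as in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    var phases = [][]string{
    	{"certs", "all"},
    	{"kubeconfig", "all"},
    	{"kubelet-start"},
    	{"control-plane", "all"},
    	{"etcd", "local"},
    }

    func main() {
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		// Later duplicate env entries win, so this PATH takes effect.
    		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.24.3:/usr/bin:/bin")
    		if err := cmd.Run(); err != nil {
    			fmt.Println("phase failed:", p, err)
    			return
    		}
    	}
    }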
	I0725 16:56:13.306926   31337 api_server.go:51] waiting for apiserver process to appear ...
	I0725 16:56:13.306998   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:13.818325   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:14.316810   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:14.816722   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:14.832982   31337 api_server.go:71] duration metric: took 1.526047531s to wait for apiserver process to appear ...
	I0725 16:56:14.833006   31337 api_server.go:87] waiting for apiserver healthz status ...
	I0725 16:56:14.833021   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:17.439565   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 16:56:17.439586   31337 api_server.go:102] status: https://127.0.0.1:51314/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 16:56:17.940421   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:17.947568   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 16:56:17.947582   31337 api_server.go:102] status: https://127.0.0.1:51314/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:56:18.439749   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:18.460813   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 16:56:18.460830   31337 api_server.go:102] status: https://127.0.0.1:51314/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:56:18.939728   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:18.948093   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 200:
	ok
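The healthz sequence above is the expected recovery shape: 403 while anonymous requests are rejected before RBAC bootstrap, 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, then 200 "ok". A sketch of the probe itself, hitting the forwarded localhost port over TLS with verification disabled (the apiserver cert is minted for the cluster's own SANs, not 127.0.0.1); checkHealthz is an illustrative name:

    // Sketch of the healthz probe: GET https://127.0.0.1:<port>/healthz with
    // certificate verification disabled, then report status + body.
    // checkHealthz is an illustrative name.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func checkHealthz(url string) (int, string, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return 0, "", err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	return resp.StatusCode, string(body), err
    }

    func main() {
    	code, body, err := checkHealthz("https://127.0.0.1:51314/healthz")
    	fmt.Println(code, body, err) // 403 -> 500 (bootstrap hooks) -> 200 "ok"
    }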
	I0725 16:56:18.957429   31337 api_server.go:140] control plane version: v1.24.3
	I0725 16:56:18.957444   31337 api_server.go:130] duration metric: took 4.124403291s to wait for apiserver health ...
	I0725 16:56:18.957449   31337 cni.go:95] Creating CNI manager for ""
	I0725 16:56:18.957455   31337 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:56:18.957467   31337 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 16:56:18.966151   31337 system_pods.go:59] 8 kube-system pods found
	I0725 16:56:18.966170   31337 system_pods.go:61] "coredns-6d4b75cb6d-brjzw" [7a073b93-7d6d-41af-bbc5-b6bb4ba61b61] Running
	I0725 16:56:18.966174   31337 system_pods.go:61] "etcd-embed-certs-20220725165448-14919" [35f46355-a412-4e3a-9e75-41fb9d357be2] Running
	I0725 16:56:18.966180   31337 system_pods.go:61] "kube-apiserver-embed-certs-20220725165448-14919" [b920b524-5ee8-47c8-ab93-078997c96a9d] Running
	I0725 16:56:18.966184   31337 system_pods.go:61] "kube-controller-manager-embed-certs-20220725165448-14919" [6bd916cf-3e22-4a72-8eea-ad9fc77fcdac] Running
	I0725 16:56:18.966190   31337 system_pods.go:61] "kube-proxy-qz466" [2436156a-42df-4487-bbf0-3723eaaefdfb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 16:56:18.966197   31337 system_pods.go:61] "kube-scheduler-embed-certs-20220725165448-14919" [d4172f18-e47e-434b-aef2-c0c9dbab78d5] Running
	I0725 16:56:18.966205   31337 system_pods.go:61] "metrics-server-5c6f97fb75-dvwxz" [4be1f012-c669-4285-8fce-b98e892d097f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 16:56:18.966226   31337 system_pods.go:61] "storage-provisioner" [9a9f14a2-6357-4e11-9e55-238e2bc5349d] Running
	I0725 16:56:18.966241   31337 system_pods.go:74] duration metric: took 8.767149ms to wait for pod list to return data ...
	I0725 16:56:18.966251   31337 node_conditions.go:102] verifying NodePressure condition ...
	I0725 16:56:18.969371   31337 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 16:56:18.969384   31337 node_conditions.go:123] node cpu capacity is 6
	I0725 16:56:18.969392   31337 node_conditions.go:105] duration metric: took 3.137023ms to run NodePressure ...
	I0725 16:56:18.969403   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:19.130505   31337 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0725 16:56:19.134987   31337 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0725 16:56:19.418291   31337 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0725 16:56:19.990680   31337 kubeadm.go:777] kubelet initialised
	I0725 16:56:19.990692   31337 kubeadm.go:778] duration metric: took 860.168437ms waiting for restarted kubelet to initialise ...
	I0725 16:56:19.990701   31337 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
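The pod_ready waits that follow poll each system-critical pod's Ready condition, with a 4m0s ceiling per pod. Without pulling in client-go, the same check can be sketched by shelling out to kubectl's jsonpath output; this kubectl-based approach is an assumption for illustration, not what minikube actually runs:

    // Sketch of a "wait for Ready" poll per pod, via kubectl jsonpath rather
    // than minikube's client-go code. podReady/waitPodReady are illustrative.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func podReady(ns, name string) bool {
    	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func waitPodReady(ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if podReady(ns, name) {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
    }

    func main() {
    	fmt.Println(waitPodReady("kube-system", "coredns-6d4b75cb6d-brjzw", 4*time.Minute))
    }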
	I0725 16:56:19.997037   31337 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-brjzw" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:20.006432   31337 pod_ready.go:92] pod "coredns-6d4b75cb6d-brjzw" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:20.006441   31337 pod_ready.go:81] duration metric: took 9.369186ms waiting for pod "coredns-6d4b75cb6d-brjzw" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:20.006448   31337 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:22.022967   31337 pod_ready.go:102] pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:24.520791   31337 pod_ready.go:102] pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:26.521281   31337 pod_ready.go:92] pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:26.521294   31337 pod_ready.go:81] duration metric: took 6.514796336s waiting for pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:26.521301   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.033931   31337 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:27.033944   31337 pod_ready.go:81] duration metric: took 512.6349ms waiting for pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.033950   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.038066   31337 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:27.038074   31337 pod_ready.go:81] duration metric: took 4.11923ms waiting for pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.038079   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qz466" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.042382   31337 pod_ready.go:92] pod "kube-proxy-qz466" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:27.042391   31337 pod_ready.go:81] duration metric: took 4.306864ms waiting for pod "kube-proxy-qz466" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.042397   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:29.054332   31337 pod_ready.go:102] pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:31.553231   31337 pod_ready.go:102] pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:33.054275   31337 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:33.054288   31337 pod_ready.go:81] duration metric: took 6.011844144s waiting for pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:33.054295   31337 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:35.064195   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:37.065735   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:39.564369   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:41.565036   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:43.566029   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:46.066803   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:48.565574   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:50.567360   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:53.064054   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:55.064766   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:57.066535   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:59.565727   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:01.567296   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:04.067915   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:06.564528   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:08.567321   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:11.064570   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:13.065974   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:15.066410   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:17.565524   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:20.064374   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:22.066550   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:24.567486   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:26.568010   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:29.064670   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:31.065977   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:33.067605   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:35.565701   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:37.566461   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:40.067424   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:42.564117   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:44.566188   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:46.567544   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:49.065322   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:51.067604   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:53.567982   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:56.064199   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:58.066495   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	W0725 16:58:00.726845   30645 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
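When init times out like this, the recovery that follows in the log is reset-then-retry: `kubeadm reset --force` against the old CRI socket, a check that the stale /etc/kubernetes configs are gone, and a re-run of the same init command. A compressed sketch of that flow; error handling is trimmed, the long --ignore-preflight-errors list is abbreviated, and retryInit is an illustrative name:

    // Sketch of the reset-and-retry recovery that follows this failure:
    // kubeadm reset wipes state, then the init command is retried.
    // retryInit is an illustrative name; commands mirror the log,
    // with the preflight-ignore list abbreviated.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) error {
    	return exec.Command(args[0], args[1:]...).Run()
    }

    func retryInit() error {
    	// Wipe the half-initialised control plane first.
    	if err := run("sudo", "kubeadm", "reset", "--cri-socket", "/var/run/dockershim.sock", "--force"); err != nil {
    		return fmt.Errorf("reset: %w", err)
    	}
    	// Second attempt with the same config and preflight ignores.
    	return run("sudo", "kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors=Swap,SystemVerification")
    }

    func main() {
    	fmt.Println(retryInit())
    }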
	
	I0725 16:58:00.726876   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:58:01.152676   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:58:01.162348   30645 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:58:01.162398   30645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:58:01.169739   30645 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:58:01.169757   30645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:58:01.932563   30645 out.go:204]   - Generating certificates and keys ...
	I0725 16:58:02.879021   30645 out.go:204]   - Booting up control plane ...
	I0725 16:58:00.067345   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:02.565160   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:05.066397   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:07.066907   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:09.564472   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:11.565607   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:14.064290   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:16.067942   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:18.568032   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:21.065165   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:23.065894   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:25.068053   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:27.568303   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:29.569270   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:32.067312   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:34.067798   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:36.567613   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:39.065477   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:41.067979   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:43.565007   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:45.566604   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:48.064632   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:50.067874   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:52.068045   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:54.568248   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:57.065466   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:59.065588   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:01.068271   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:03.564939   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:05.567021   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:08.066080   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:10.066132   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:12.067084   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:14.068876   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:16.566420   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:19.066562   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:21.066964   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:23.565970   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:26.067272   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:28.566308   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:31.065483   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:33.566418   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:36.066933   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:38.565560   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:40.566430   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:42.569077   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:45.068908   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:47.567704   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:50.068664   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:52.069481   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:57.797952   30645 kubeadm.go:397] StartCluster complete in 7m59.508645122s
	I0725 16:59:57.798033   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:59:57.827359   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.827371   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:59:57.827433   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:59:57.857686   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.857699   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:59:57.857755   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:59:57.887067   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.887079   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:59:57.887137   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:59:57.916980   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.916992   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:59:57.917054   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:59:57.946633   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.946646   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:59:57.946705   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:59:57.976302   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.976314   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:59:57.976371   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:59:58.006163   30645 logs.go:274] 0 containers: []
	W0725 16:59:58.006175   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:59:58.006233   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:59:58.034791   30645 logs.go:274] 0 containers: []
	W0725 16:59:58.034803   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:59:58.034811   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:59:58.034818   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:59:58.075762   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:59:58.075777   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:59:58.087641   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:59:58.087653   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:59:58.142043   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
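
The "connection refused" on localhost:8443 above simply means no apiserver was listening yet. A minimal reachability check, using the same binary and kubeconfig paths as the failed command (a hypothetical invocation for illustration, not part of the test run):

	# probe the apiserver through the cluster's own kubeconfig; exits non-zero while nothing listens on 8443
	sudo /var/lib/minikube/binaries/v1.16.0/kubectl cluster-info --kubeconfig=/var/lib/minikube/kubeconfig
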
	I0725 16:59:58.142055   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:59:58.142062   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:59:58.156155   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:59:58.156167   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:59:54.568030   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:56.569052   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:00.209432   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053238365s)
	W0725 17:00:00.209581   30645 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (e.g. required cgroups are disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
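
The troubleshooting advice embedded in the kubeadm output above boils down to three checks. Collected into one triage sketch, using only the commands kubeadm itself suggests (CONTAINERID is a placeholder):

	# 1. is the kubelet unit running at all?
	systemctl status kubelet
	# 2. if not, why did it exit?
	journalctl -xeu kubelet
	# 3. otherwise, did a control-plane container crash after starting?
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID
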
	W0725 17:00:00.209596   30645 out.go:239] * 
	W0725 17:00:00.209762   30645 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (e.g. required cgroups are disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 17:00:00.209776   30645 out.go:239] * 
	W0725 17:00:00.210311   30645 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 17:00:00.272919   30645 out.go:177] 
	W0725 17:00:00.315153   30645 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (e.g. required cgroups are disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 17:00:00.315316   30645 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 17:00:00.315414   30645 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
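
Spelled out as a full command line, the suggested workaround would look like the sketch below; --driver=docker matches this run, and any profile-specific flags are omitted:

	minikube start --driver=docker --extra-config=kubelet.cgroup-driver=systemd
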
	I0725 17:00:00.372884   30645 out.go:177] 
	I0725 16:59:59.068427   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:01.567601   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:04.065736   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:06.066221   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:08.068476   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:10.068614   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:12.068934   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:14.568007   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:16.568732   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:19.068149   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:21.567711   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:24.065850   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:26.068727   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:28.568827   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:31.068963   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:33.060983   31337 pod_ready.go:81] duration metric: took 4m0.00492833s waiting for pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace to be "Ready" ...
	E0725 17:00:33.061007   31337 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 17:00:33.061024   31337 pod_ready.go:38] duration metric: took 4m13.06855299s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:00:33.061067   31337 kubeadm.go:630] restartCluster took 4m24.244360087s
	W0725 17:00:33.061193   31337 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0725 17:00:33.061224   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 17:00:35.469010   31337 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.40775314s)
	I0725 17:00:35.469071   31337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:00:35.478242   31337 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:00:35.486244   31337 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 17:00:35.486305   31337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:00:35.493582   31337 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 17:00:35.493607   31337 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 17:00:35.774076   31337 out.go:204]   - Generating certificates and keys ...
	I0725 17:00:36.489304   31337 out.go:204]   - Booting up control plane ...
	I0725 17:00:43.532995   31337 out.go:204]   - Configuring RBAC rules ...
	I0725 17:00:43.910442   31337 cni.go:95] Creating CNI manager for ""
	I0725 17:00:43.910470   31337 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:00:43.910508   31337 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 17:00:43.910631   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b minikube.k8s.io/name=embed-certs-20220725165448-14919 minikube.k8s.io/updated_at=2022_07_25T17_00_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:43.910632   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:44.050939   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:44.115011   31337 ops.go:34] apiserver oom_adj: -16
	I0725 17:00:44.651229   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:45.151189   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:45.650666   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:46.150900   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:46.650738   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:47.150365   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:47.650430   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:48.151145   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:48.651175   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:49.151341   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:49.652492   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:50.150623   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:50.650515   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:51.151780   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:51.650676   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:52.151196   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:52.650459   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:53.150583   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:53.650428   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:54.150525   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:54.651147   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:55.152508   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:55.652544   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:56.150422   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:56.650515   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:56.724309   31337 kubeadm.go:1045] duration metric: took 12.813699078s to wait for elevateKubeSystemPrivileges.
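
The burst of `kubectl get sa default` invocations above is a poll: minikube retries roughly every half second until the controller-manager has created the "default" ServiceAccount, then applies the minikube-rbac binding. A rough shell equivalent of that wait loop, assuming the same binary and kubeconfig paths as the log:

	# block until the default ServiceAccount exists in the default namespace
	until sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
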
	I0725 17:00:56.724324   31337 kubeadm.go:397] StartCluster complete in 4m47.944971599s
	I0725 17:00:56.724338   31337 settings.go:142] acquiring lock: {Name:mkcd702d4f365962a78fa014f59c2f8489658e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:00:56.724416   31337 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:00:56.725236   31337 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:00:57.240866   31337 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220725165448-14919" rescaled to 1
	I0725 17:00:57.240941   31337 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 17:00:57.240963   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 17:00:57.240989   31337 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 17:00:57.241141   31337 config.go:178] Loaded profile config "embed-certs-20220725165448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:00:57.264201   31337 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220725165448-14919"
	I0725 17:00:57.264201   31337 addons.go:65] Setting dashboard=true in profile "embed-certs-20220725165448-14919"
	I0725 17:00:57.264203   31337 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220725165448-14919"
	I0725 17:00:57.264107   31337 out.go:177] * Verifying Kubernetes components...
	I0725 17:00:57.264219   31337 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220725165448-14919"
	I0725 17:00:57.264218   31337 addons.go:153] Setting addon dashboard=true in "embed-certs-20220725165448-14919"
	W0725 17:00:57.284986   31337 addons.go:162] addon dashboard should already be in state true
	I0725 17:00:57.284988   31337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:00:57.264220   31337 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220725165448-14919"
	I0725 17:00:57.264227   31337 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220725165448-14919"
	W0725 17:00:57.264229   31337 addons.go:162] addon storage-provisioner should already be in state true
	I0725 17:00:57.285060   31337 host.go:66] Checking if "embed-certs-20220725165448-14919" exists ...
	I0725 17:00:57.285068   31337 host.go:66] Checking if "embed-certs-20220725165448-14919" exists ...
	I0725 17:00:57.285070   31337 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220725165448-14919"
	W0725 17:00:57.285083   31337 addons.go:162] addon metrics-server should already be in state true
	I0725 17:00:57.285117   31337 host.go:66] Checking if "embed-certs-20220725165448-14919" exists ...
	I0725 17:00:57.285457   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 17:00:57.285592   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 17:00:57.285671   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 17:00:57.286385   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 17:00:57.415998   31337 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 17:00:57.399628   31337 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220725165448-14919"
	I0725 17:00:57.413972   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 17:00:57.413983   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	W0725 17:00:57.416042   31337 addons.go:162] addon default-storageclass should already be in state true
	I0725 17:00:57.494630   31337 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 17:00:57.458016   31337 host.go:66] Checking if "embed-certs-20220725165448-14919" exists ...
	I0725 17:00:57.458063   31337 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:00:57.495429   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 17:00:57.515856   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 17:00:57.515966   31337 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 17:00:57.536794   31337 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 17:00:57.536890   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 17:00:57.536991   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 17:00:57.537137   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 17:00:57.610879   31337 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 17:00:57.649149   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 17:00:57.649174   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 17:00:57.649300   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 17:00:57.657976   31337 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220725165448-14919" to be "Ready" ...
	I0725 17:00:57.674072   31337 node_ready.go:49] node "embed-certs-20220725165448-14919" has status "Ready":"True"
	I0725 17:00:57.674087   31337 node_ready.go:38] duration metric: took 16.048164ms waiting for node "embed-certs-20220725165448-14919" to be "Ready" ...
	I0725 17:00:57.674097   31337 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0725 17:00:57.684401   31337 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-d6xzg" in "kube-system" namespace to be "Ready" ...
	I0725 17:00:57.685652   31337 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 17:00:57.685687   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 17:00:57.685773   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 17:00:57.688461   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 17:00:57.690768   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 17:00:57.757363   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 17:00:57.783371   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 17:00:57.911602   31337 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 17:00:57.911614   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 17:00:57.917698   31337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:00:57.999960   31337 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 17:00:57.999986   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 17:00:58.018084   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 17:00:58.018102   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 17:00:58.022599   31337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 17:00:58.191357   31337 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:00:58.191380   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 17:00:58.200313   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 17:00:58.200332   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 17:00:58.223838   31337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:00:58.227495   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 17:00:58.227511   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 17:00:58.313210   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 17:00:58.313243   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 17:00:58.394540   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 17:00:58.394558   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 17:00:58.419464   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 17:00:58.419493   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 17:00:58.439397   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 17:00:58.457443   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 17:00:58.508592   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 17:00:58.508610   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 17:00:58.529325   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:00:58.529341   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 17:00:58.612281   31337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:00:58.617864   31337 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.201784552s)
	I0725 17:00:58.617897   31337 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
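
The sed pipeline that just completed splices a hosts stanza into the CoreDNS Corefile ahead of its forward directive; reconstructed from that command, the injected block is:

	hosts {
	   192.168.65.2 host.minikube.internal
	   fallthrough
	}
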
	I0725 17:00:58.941047   31337 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220725165448-14919"
	I0725 17:00:59.701015   31337 pod_ready.go:92] pod "coredns-6d4b75cb6d-d6xzg" in "kube-system" namespace has status "Ready":"True"
	I0725 17:00:59.701031   31337 pod_ready.go:81] duration metric: took 2.016584043s waiting for pod "coredns-6d4b75cb6d-d6xzg" in "kube-system" namespace to be "Ready" ...
	I0725 17:00:59.701042   31337 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-swc44" in "kube-system" namespace to be "Ready" ...
	I0725 17:00:59.714761   31337 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.102440248s)
	I0725 17:00:59.740509   31337 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 17:00:59.782440   31337 addons.go:414] enableAddons completed in 2.541440275s
	I0725 17:01:01.714525   31337 pod_ready.go:102] pod "coredns-6d4b75cb6d-swc44" in "kube-system" namespace has status "Ready":"False"
	I0725 17:01:04.210103   31337 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-swc44" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-swc44" not found
	I0725 17:01:04.210118   31337 pod_ready.go:81] duration metric: took 4.509032206s waiting for pod "coredns-6d4b75cb6d-swc44" in "kube-system" namespace to be "Ready" ...
	E0725 17:01:04.210124   31337 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-swc44" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-swc44" not found
	I0725 17:01:04.210143   31337 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.214615   31337 pod_ready.go:92] pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:01:04.214624   31337 pod_ready.go:81] duration metric: took 4.473276ms waiting for pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.214630   31337 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.219336   31337 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:01:04.219346   31337 pod_ready.go:81] duration metric: took 4.71087ms waiting for pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.219353   31337 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.223633   31337 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:01:04.223643   31337 pod_ready.go:81] duration metric: took 4.283359ms waiting for pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.223655   31337 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-btzlf" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.227898   31337 pod_ready.go:92] pod "kube-proxy-btzlf" in "kube-system" namespace has status "Ready":"True"
	I0725 17:01:04.227908   31337 pod_ready.go:81] duration metric: took 4.247966ms waiting for pod "kube-proxy-btzlf" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.227915   31337 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.410762   31337 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:01:04.410772   31337 pod_ready.go:81] duration metric: took 182.850933ms waiting for pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.410778   31337 pod_ready.go:38] duration metric: took 6.73660784s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:01:04.410794   31337 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:01:04.410850   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:01:04.422704   31337 api_server.go:71] duration metric: took 7.181690097s to wait for apiserver process to appear ...
	I0725 17:01:04.422724   31337 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:01:04.422734   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 17:01:04.429197   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 200:
	ok
	I0725 17:01:04.430620   31337 api_server.go:140] control plane version: v1.24.3
	I0725 17:01:04.430630   31337 api_server.go:130] duration metric: took 7.90082ms to wait for apiserver health ...
	I0725 17:01:04.430635   31337 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:01:04.612902   31337 system_pods.go:59] 8 kube-system pods found
	I0725 17:01:04.612916   31337 system_pods.go:61] "coredns-6d4b75cb6d-d6xzg" [b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e] Running
	I0725 17:01:04.612921   31337 system_pods.go:61] "etcd-embed-certs-20220725165448-14919" [b4a7df5c-f7c3-401a-aae5-9282b70074bb] Running
	I0725 17:01:04.612924   31337 system_pods.go:61] "kube-apiserver-embed-certs-20220725165448-14919" [227f6a1e-3c8a-46d1-9c24-3665f3597f8c] Running
	I0725 17:01:04.612936   31337 system_pods.go:61] "kube-controller-manager-embed-certs-20220725165448-14919" [03e9614c-bbb2-41ce-a7bd-6f478a7ee2a9] Running
	I0725 17:01:04.612940   31337 system_pods.go:61] "kube-proxy-btzlf" [8deb0ba6-2b1a-4818-8ebc-1c4404059440] Running
	I0725 17:01:04.612944   31337 system_pods.go:61] "kube-scheduler-embed-certs-20220725165448-14919" [d684baa2-8a97-44a7-864a-1881f3ee5af9] Running
	I0725 17:01:04.612955   31337 system_pods.go:61] "metrics-server-5c6f97fb75-h9h79" [801a7dd2-dcd6-4bca-ad12-a098f6b4630f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:01:04.612962   31337 system_pods.go:61] "storage-provisioner" [548a2d46-6808-436e-98c4-b9f0e0c17662] Running
	I0725 17:01:04.612965   31337 system_pods.go:74] duration metric: took 182.326031ms to wait for pod list to return data ...
	I0725 17:01:04.612970   31337 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:01:04.810529   31337 default_sa.go:45] found service account: "default"
	I0725 17:01:04.810540   31337 default_sa.go:55] duration metric: took 197.564551ms for default service account to be created ...
	I0725 17:01:04.810545   31337 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 17:01:05.013205   31337 system_pods.go:86] 8 kube-system pods found
	I0725 17:01:05.013219   31337 system_pods.go:89] "coredns-6d4b75cb6d-d6xzg" [b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e] Running
	I0725 17:01:05.013224   31337 system_pods.go:89] "etcd-embed-certs-20220725165448-14919" [b4a7df5c-f7c3-401a-aae5-9282b70074bb] Running
	I0725 17:01:05.013228   31337 system_pods.go:89] "kube-apiserver-embed-certs-20220725165448-14919" [227f6a1e-3c8a-46d1-9c24-3665f3597f8c] Running
	I0725 17:01:05.013234   31337 system_pods.go:89] "kube-controller-manager-embed-certs-20220725165448-14919" [03e9614c-bbb2-41ce-a7bd-6f478a7ee2a9] Running
	I0725 17:01:05.013237   31337 system_pods.go:89] "kube-proxy-btzlf" [8deb0ba6-2b1a-4818-8ebc-1c4404059440] Running
	I0725 17:01:05.013241   31337 system_pods.go:89] "kube-scheduler-embed-certs-20220725165448-14919" [d684baa2-8a97-44a7-864a-1881f3ee5af9] Running
	I0725 17:01:05.013263   31337 system_pods.go:89] "metrics-server-5c6f97fb75-h9h79" [801a7dd2-dcd6-4bca-ad12-a098f6b4630f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:01:05.013267   31337 system_pods.go:89] "storage-provisioner" [548a2d46-6808-436e-98c4-b9f0e0c17662] Running
	I0725 17:01:05.013271   31337 system_pods.go:126] duration metric: took 202.721689ms to wait for k8s-apps to be running ...
	I0725 17:01:05.013275   31337 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 17:01:05.013331   31337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:01:05.025516   31337 system_svc.go:56] duration metric: took 12.235142ms WaitForService to wait for kubelet.
	I0725 17:01:05.025533   31337 kubeadm.go:572] duration metric: took 7.784518192s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 17:01:05.025561   31337 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:01:05.211027   31337 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:01:05.211042   31337 node_conditions.go:123] node cpu capacity is 6
	I0725 17:01:05.211049   31337 node_conditions.go:105] duration metric: took 185.481124ms to run NodePressure ...
	I0725 17:01:05.211066   31337 start.go:216] waiting for startup goroutines ...
	I0725 17:01:05.246259   31337 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0725 17:01:05.271161   31337 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220725165448-14919" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 23:56:04 UTC, end at Tue 2022-07-26 00:01:56 UTC. --
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.162419001Z" level=info msg="ignoring event" container=a81dd714afbaaa87065408fa0727c12e92f2fcbae814e51f7357351807392281 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.231633240Z" level=info msg="ignoring event" container=00f62924e714dc747f6caaf6c8676517b5243e7782797b741f244c595222861c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.375501926Z" level=info msg="ignoring event" container=ac3bf67e7b2c681dc35c4e34d8f528f8759ed80dcc5b9cb41b05f9fdbd7481d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.455544617Z" level=info msg="ignoring event" container=757bac5b19ab51335f8d0c5509f25bd5ee1725c0f8e44ff856a6499f73b76bf4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.531690086Z" level=info msg="ignoring event" container=f51838bbe79569817cec8830f282d528905e348576fa586265213300ef1006fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.599835692Z" level=info msg="ignoring event" container=5360920e9165fb5fc1ea74da954afeb323da7f72e5b84848e646e5cb288208b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.665761682Z" level=info msg="ignoring event" container=7406ae2e4a6cc4904cb0f26bef5b440d1cc525badbc686ef11819b4421f1d2df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.746469967Z" level=info msg="ignoring event" container=bb09fee3656978a5ae31c2ed1653b76935ca1a760acd0bb2254ced3361b6315b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.812720126Z" level=info msg="ignoring event" container=1a211623a0f3a3b3c7433953256eac67f5740baec04d1ff90750a1157a654730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.935840230Z" level=info msg="ignoring event" container=d9f0ab99940b7d09e1791a342441dc5eabdbcd63dbd8ed2dfd3e544ab1c2fb75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:35 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:35.005065305Z" level=info msg="ignoring event" container=dea0951dc2f8669e95f269d52186b5cacca8c31f5e9a9c49b2fb50abbc53f332 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:35 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:35.100776984Z" level=info msg="ignoring event" container=918eacec4c03004ec202e37b369028e5b51e7e41212875ae7d1b8e3123c5ae49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:59 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:59.439108257Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:00:59 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:59.439155872Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:00:59 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:59.440466923Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:01:00 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:00.729531071Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 26 00:01:03 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:03.342003464Z" level=info msg="ignoring event" container=0bf7b88994572a2a72f9f4887796421435cd8f2bde611adb11b590e653d34804 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:01:03 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:03.518785392Z" level=info msg="ignoring event" container=fd641746d2695ac7f53aec36776b6e9b218bf6590a3e71346e7135f37019f94c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:01:06 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:06.485766887Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 26 00:01:06 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:06.809369781Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 26 00:01:10 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:10.242457340Z" level=info msg="ignoring event" container=8e6389c3d921c90688a2e4c4e247f99261edac7ac16b37de83066be350f9d475 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:01:10 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:10.866214525Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:01:10 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:10.866370454Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:01:10 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:10.867613609Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:01:11 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:11.270397457Z" level=info msg="ignoring event" container=d1fb63dcdfcd94ef6f5d272828c1527e4782fd15dd1cd643972f67a8a958aadb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	d1fb63dcdfcd9       a90209bb39e3d                                                                                    45 seconds ago       Exited              dashboard-metrics-scraper   1                   abf7b25d9ca8c
	998469abc2552       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   51 seconds ago       Running             kubernetes-dashboard        0                   142702a0489a6
	cd605c9b8b838       6e38f40d628db                                                                                    57 seconds ago       Running             storage-provisioner         0                   562470f1edc6a
	0205641d17436       a4ca41631cc7a                                                                                    59 seconds ago       Running             coredns                     0                   c94891dfa4c62
	91ebce0851267       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   a4957d3849348
	88c2b8e191f66       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   b7fa674bf8856
	3e533a3b17d40       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   96df2802fa4d3
	279d2db5ac5e5       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   e53b21cb236db
	15dfe7450e920       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   01bc1c2f2bb3a
	
	* 
	* ==> coredns [0205641d1743] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220725165448-14919
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220725165448-14919
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b
	                    minikube.k8s.io/name=embed-certs-20220725165448-14919
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T17_00_43_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Jul 2022 00:00:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220725165448-14919
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Jul 2022 00:01:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Jul 2022 00:01:54 +0000   Tue, 26 Jul 2022 00:00:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Jul 2022 00:01:54 +0000   Tue, 26 Jul 2022 00:00:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Jul 2022 00:01:54 +0000   Tue, 26 Jul 2022 00:00:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Jul 2022 00:01:54 +0000   Tue, 26 Jul 2022 00:01:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20220725165448-14919
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                ff34fd86-8938-44ae-899e-d617c3d39649
	  Boot ID:                    95c3cee9-5325-46b1-8645-b2afb4791ab2
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-d6xzg                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     61s
	  kube-system                 etcd-embed-certs-20220725165448-14919                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kube-apiserver-embed-certs-20220725165448-14919             250m (4%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-embed-certs-20220725165448-14919    200m (3%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-btzlf                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-embed-certs-20220725165448-14919             100m (1%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 metrics-server-5c6f97fb75-h9h79                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         59s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-s8h8w                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-kxp9z                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 60s                kube-proxy       
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s (x4 over 80s)  kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x3 over 80s)  kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x3 over 80s)  kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  74s                kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s                kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s                kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientPID
	  Normal  NodeReady                73s                kubelet          Node embed-certs-20220725165448-14919 status is now: NodeReady
	  Normal  RegisteredNode           62s                node-controller  Node embed-certs-20220725165448-14919 event: Registered Node embed-certs-20220725165448-14919 in Controller
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                 kubelet          Node embed-certs-20220725165448-14919 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                 kubelet          Node embed-certs-20220725165448-14919 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [15dfe7450e92] <==
	* {"level":"info","ts":"2022-07-26T00:00:38.011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-26T00:00:38.012Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-26T00:00:38.012Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-26T00:00:38.012Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-26T00:00:38.012Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-26T00:00:38.013Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:00:38.013Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-20220725165448-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:00:38.055Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-26T00:00:38.055Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-26T00:00:38.055Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-26T00:00:38.055Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:01:57 up  1:08,  0 users,  load average: 1.02, 0.90, 1.08
	Linux embed-certs-20220725165448-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [88c2b8e191f6] <==
	* I0726 00:00:41.694847       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0726 00:00:41.956557       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0726 00:00:41.984296       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0726 00:00:42.040530       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0726 00:00:42.044561       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0726 00:00:42.045354       1 controller.go:611] quota admission added evaluator for: endpoints
	I0726 00:00:42.047934       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0726 00:00:42.830093       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0726 00:00:43.723854       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0726 00:00:43.729426       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0726 00:00:43.738601       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0726 00:00:43.822744       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0726 00:00:56.388774       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0726 00:00:56.488283       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0726 00:00:56.994122       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0726 00:00:58.945882       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.108.57.55]
	I0726 00:00:59.710815       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.109.250.165]
	I0726 00:00:59.720704       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.105.132.245]
	W0726 00:00:59.740422       1 handler_proxy.go:102] no RequestInfo found in the context
	W0726 00:00:59.740506       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:00:59.740526       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0726 00:00:59.740537       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0726 00:00:59.740554       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0726 00:00:59.741794       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3e533a3b17d4] <==
	* I0726 00:00:56.692674       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-d6xzg"
	I0726 00:00:56.755754       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0726 00:00:56.759345       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-swc44"
	I0726 00:00:58.730758       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0726 00:00:58.734080       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0726 00:00:58.737018       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0726 00:00:58.741559       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-h9h79"
	I0726 00:00:59.518887       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0726 00:00:59.523819       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:00:59.528696       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:00:59.529448       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0726 00:00:59.532618       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:00:59.532631       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0726 00:00:59.537260       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:00:59.537302       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0726 00:00:59.537315       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:00:59.540563       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0726 00:00:59.547236       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:00:59.547259       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:00:59.550096       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:00:59.550336       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0726 00:00:59.564729       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-kxp9z"
	I0726 00:00:59.564826       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-s8h8w"
	E0726 00:01:53.866257       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0726 00:01:53.873130       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [91ebce085126] <==
	* I0726 00:00:56.968103       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0726 00:00:56.968164       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0726 00:00:56.968183       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0726 00:00:56.990277       1 server_others.go:206] "Using iptables Proxier"
	I0726 00:00:56.990314       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0726 00:00:56.990321       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0726 00:00:56.990330       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0726 00:00:56.990348       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:00:56.990444       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:00:56.991455       1 server.go:661] "Version info" version="v1.24.3"
	I0726 00:00:56.991483       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0726 00:00:56.992019       1 config.go:317] "Starting service config controller"
	I0726 00:00:56.992051       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0726 00:00:56.992065       1 config.go:444] "Starting node config controller"
	I0726 00:00:56.992068       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0726 00:00:56.992715       1 config.go:226] "Starting endpoint slice config controller"
	I0726 00:00:56.992744       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0726 00:00:57.092185       1 shared_informer.go:262] Caches are synced for service config
	I0726 00:00:57.092248       1 shared_informer.go:262] Caches are synced for node config
	I0726 00:00:57.093676       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [279d2db5ac5e] <==
	* W0726 00:00:40.743721       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0726 00:00:40.743752       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0726 00:00:40.743965       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0726 00:00:40.743477       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:00:40.744380       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0726 00:00:40.744390       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0726 00:00:40.744076       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0726 00:00:40.744401       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0726 00:00:40.744410       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0726 00:00:40.744417       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0726 00:00:41.589070       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0726 00:00:41.589106       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0726 00:00:41.593409       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0726 00:00:41.593451       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0726 00:00:41.610357       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0726 00:00:41.610426       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0726 00:00:41.620405       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0726 00:00:41.620455       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0726 00:00:41.788799       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0726 00:00:41.788836       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0726 00:00:41.809838       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0726 00:00:41.809876       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0726 00:00:41.832642       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:00:41.832679       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0726 00:00:43.732949       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 23:56:04 UTC, end at Tue 2022-07-26 00:01:58 UTC. --
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572107    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws5hr\" (UniqueName: \"kubernetes.io/projected/0451d129-9e25-448c-b4a6-6a160fa6d714-kube-api-access-ws5hr\") pod \"dashboard-metrics-scraper-dffd48c4c-s8h8w\" (UID: \"0451d129-9e25-448c-b4a6-6a160fa6d714\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-s8h8w"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572133    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs6tj\" (UniqueName: \"kubernetes.io/projected/8deb0ba6-2b1a-4818-8ebc-1c4404059440-kube-api-access-cs6tj\") pod \"kube-proxy-btzlf\" (UID: \"8deb0ba6-2b1a-4818-8ebc-1c4404059440\") " pod="kube-system/kube-proxy-btzlf"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572152    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8deb0ba6-2b1a-4818-8ebc-1c4404059440-xtables-lock\") pod \"kube-proxy-btzlf\" (UID: \"8deb0ba6-2b1a-4818-8ebc-1c4404059440\") " pod="kube-system/kube-proxy-btzlf"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572174    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e-config-volume\") pod \"coredns-6d4b75cb6d-d6xzg\" (UID: \"b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e\") " pod="kube-system/coredns-6d4b75cb6d-d6xzg"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572195    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8deb0ba6-2b1a-4818-8ebc-1c4404059440-kube-proxy\") pod \"kube-proxy-btzlf\" (UID: \"8deb0ba6-2b1a-4818-8ebc-1c4404059440\") " pod="kube-system/kube-proxy-btzlf"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572232    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8deb0ba6-2b1a-4818-8ebc-1c4404059440-lib-modules\") pod \"kube-proxy-btzlf\" (UID: \"8deb0ba6-2b1a-4818-8ebc-1c4404059440\") " pod="kube-system/kube-proxy-btzlf"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572312    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27p42\" (UniqueName: \"kubernetes.io/projected/548a2d46-6808-436e-98c4-b9f0e0c17662-kube-api-access-27p42\") pod \"storage-provisioner\" (UID: \"548a2d46-6808-436e-98c4-b9f0e0c17662\") " pod="kube-system/storage-provisioner"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572334    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkzwd\" (UniqueName: \"kubernetes.io/projected/801a7dd2-dcd6-4bca-ad12-a098f6b4630f-kube-api-access-qkzwd\") pod \"metrics-server-5c6f97fb75-h9h79\" (UID: \"801a7dd2-dcd6-4bca-ad12-a098f6b4630f\") " pod="kube-system/metrics-server-5c6f97fb75-h9h79"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572423    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55753ac7-fd73-4470-be9e-0e5b0e8d250e-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-kxp9z\" (UID: \"55753ac7-fd73-4470-be9e-0e5b0e8d250e\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-kxp9z"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572549    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4hqt\" (UniqueName: \"kubernetes.io/projected/b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e-kube-api-access-b4hqt\") pod \"coredns-6d4b75cb6d-d6xzg\" (UID: \"b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e\") " pod="kube-system/coredns-6d4b75cb6d-d6xzg"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572603    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mswqq\" (UniqueName: \"kubernetes.io/projected/55753ac7-fd73-4470-be9e-0e5b0e8d250e-kube-api-access-mswqq\") pod \"kubernetes-dashboard-5fd5574d9f-kxp9z\" (UID: \"55753ac7-fd73-4470-be9e-0e5b0e8d250e\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-kxp9z"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572621    9784 reconciler.go:157] "Reconciler: start to sync state"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:55.729701    9784 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220725165448-14919\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220725165448-14919"
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.128254    9784 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220725165448-14919\" already exists" pod="kube-system/etcd-embed-certs-20220725165448-14919"
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.327474    9784 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220725165448-14919\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220725165448-14919"
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:56.523281    9784 request.go:601] Waited for 1.050292898s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.528359    9784 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220725165448-14919\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220725165448-14919"
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.674732    9784 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.674894    9784 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e-config-volume podName:b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e nodeName:}" failed. No retries permitted until 2022-07-26 00:01:57.174866482 +0000 UTC m=+3.154435978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e-config-volume") pod "coredns-6d4b75cb6d-d6xzg" (UID: "b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e") : failed to sync configmap cache: timed out waiting for the condition
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.675174    9784 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.675384    9784 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8deb0ba6-2b1a-4818-8ebc-1c4404059440-kube-proxy podName:8deb0ba6-2b1a-4818-8ebc-1c4404059440 nodeName:}" failed. No retries permitted until 2022-07-26 00:01:57.175359374 +0000 UTC m=+3.154928869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/8deb0ba6-2b1a-4818-8ebc-1c4404059440-kube-proxy") pod "kube-proxy-btzlf" (UID: "8deb0ba6-2b1a-4818-8ebc-1c4404059440") : failed to sync configmap cache: timed out waiting for the condition
	Jul 26 00:01:58 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:58.043223    9784 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 26 00:01:58 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:58.043295    9784 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 26 00:01:58 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:58.043414    9784 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qkzwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-h9h79_kube-system(801a7dd2-dcd6-4bca-ad12-a098f6b4630f): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 26 00:01:58 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:58.043442    9784 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-h9h79" podUID=801a7dd2-dcd6-4bca-ad12-a098f6b4630f
	
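The ErrImagePull errors above are by design: this StartStop test re-registers the metrics-server image under the unresolvable registry fake.domain (see the `addons enable metrics-server ... --registries=MetricsServer=fake.domain` entry in the Audit table later in this report), so the daemon's pull has to fail at DNS resolution. A minimal Go sketch that reproduces the same "no such host" failure; only the hostname is taken from the log, everything else is illustrative:

	package main

	import (
		"errors"
		"fmt"
		"net"
	)

	func main() {
		// "fake.domain" is the registry host the test injects; it has no DNS
		// record, so resolution fails the same way the daemon's pull does above.
		addrs, err := net.LookupHost("fake.domain")
		if err != nil {
			var dnsErr *net.DNSError
			if errors.As(err, &dnsErr) && dnsErr.IsNotFound {
				fmt.Printf("lookup %s: no such host\n", dnsErr.Name)
				return
			}
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("unexpectedly resolved:", addrs)
	}
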
	* 
	* ==> kubernetes-dashboard [998469abc255] <==
	* 2022/07/26 00:01:05 Using namespace: kubernetes-dashboard
	2022/07/26 00:01:05 Using in-cluster config to connect to apiserver
	2022/07/26 00:01:05 Using secret token for csrf signing
	2022/07/26 00:01:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/26 00:01:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/26 00:01:06 Successful initial request to the apiserver, version: v1.24.3
	2022/07/26 00:01:06 Generating JWE encryption key
	2022/07/26 00:01:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/26 00:01:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/26 00:01:06 Initializing JWE encryption key from synchronized object
	2022/07/26 00:01:06 Creating in-cluster Sidecar client
	2022/07/26 00:01:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/26 00:01:06 Serving insecurely on HTTP port: 9090
	2022/07/26 00:01:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/26 00:01:05 Starting overwatch
	
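The dashboard itself comes up healthy; only its metric client keeps failing, polling the dashboard-metrics-scraper service on a fixed 30-second interval ("Retrying in 30 seconds." at 00:01:06 and again at 00:01:53) while continuing to serve on port 9090. A sketch of that fixed-interval retry loop, with the probe stubbed out; healthCheck here is a hypothetical stand-in for the real Sidecar call:

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// healthCheck stands in for the dashboard's metric-client probe; the real
	// one queries the dashboard-metrics-scraper service named in the log.
	func healthCheck() error {
		return errors.New("the server is currently unable to handle the request")
	}

	func main() {
		for {
			err := healthCheck()
			if err == nil {
				log.Print("metric client healthy")
				return
			}
			log.Printf("Metric client health check failed: %v. Retrying in 30 seconds.", err)
			time.Sleep(30 * time.Second)
		}
	}
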
	* 
	* ==> storage-provisioner [cd605c9b8b83] <==
	* I0726 00:00:59.894404       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0726 00:00:59.905885       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0726 00:00:59.905937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0726 00:00:59.911522       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0726 00:00:59.911794       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56f23d28-2a20-4c5d-a9f9-0ae9ce087809", APIVersion:"v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220725165448-14919_366cb3b1-65a6-4bd5-ae91-5a5581d3ab6d became leader
	I0726 00:00:59.911846       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220725165448-14919_366cb3b1-65a6-4bd5-ae91-5a5581d3ab6d!
	I0726 00:01:00.012988       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220725165448-14919_366cb3b1-65a6-4bd5-ae91-5a5581d3ab6d!
	
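The storage provisioner only starts its controller after winning a leader lease on kube-system/k8s.io-minikube-hostpath (the Event line shows it using an Endpoints-based lock). A minimal client-go sketch of the same acquire-then-run pattern using the newer Lease lock; the kubeconfig source and identity are illustrative assumptions, while the lease namespace and name are copied from the log:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		id, _ := os.Hostname()
		lock := &resourcelock.LeaseLock{
			// Same lease namespace/name the provisioner competes for above.
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Print("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Print("lost lease, stopping")
				},
			},
		})
	}
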

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220725165448-14919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-h9h79
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220725165448-14919 describe pod metrics-server-5c6f97fb75-h9h79
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220725165448-14919 describe pod metrics-server-5c6f97fb75-h9h79: exit status 1 (273.878997ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-h9h79" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220725165448-14919 describe pod metrics-server-5c6f97fb75-h9h79: exit status 1
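The NotFound here is a benign race rather than a second failure: the pod named by the field-selector listing above was replaced or garbage-collected in the window before describe ran. A sketch of the tolerant lookup a post-mortem collector could use instead, assuming client-go; the pod name is the one from the listing:

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// The pod can legitimately vanish between the list and this get.
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
			"metrics-server-5c6f97fb75-h9h79", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod already gone; skipping describe")
			return
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("pod phase:", pod.Status.Phase)
	}
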
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220725165448-14919
helpers_test.go:235: (dbg) docker inspect embed-certs-20220725165448-14919:

-- stdout --
	[
	    {
	        "Id": "9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32",
	        "Created": "2022-07-25T23:54:55.830914982Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 264743,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:56:04.753937421Z",
	            "FinishedAt": "2022-07-25T23:56:02.715447744Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32/hosts",
	        "LogPath": "/var/lib/docker/containers/9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32/9b6e28a028ba4b98e3b647c1f273abe4f57e912127401e819a0e4e717c9c5f32-json.log",
	        "Name": "/embed-certs-20220725165448-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220725165448-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220725165448-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b47ddad077f42705cc10c763d70c555f888ae17e29bbf8a52530a710f53399d4-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c700ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/docker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a700e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b47ddad077f42705cc10c763d70c555f888ae17e29bbf8a52530a710f53399d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b47ddad077f42705cc10c763d70c555f888ae17e29bbf8a52530a710f53399d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b47ddad077f42705cc10c763d70c555f888ae17e29bbf8a52530a710f53399d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220725165448-14919",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220725165448-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220725165448-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220725165448-14919",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220725165448-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34897cc780983bc169e42596f514bea27e30f21721039af68db144c6f6f3aa9b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51310"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51311"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51312"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51313"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51314"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/34897cc78098",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220725165448-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9b6e28a028ba",
	                        "embed-certs-20220725165448-14919"
	                    ],
	                    "NetworkID": "ff1a660fe92dd6c2e75d32c3e09ef643890082fb32ed982be41e16b8bb608895",
	                    "EndpointID": "5f5f62b7a742d86b0fff95ab4ead516395300b7daa36b24d2407ea46a25970da",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
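The JSON above is what the Docker Engine API returns for a container inspect; the harness shells out to `docker inspect`, but the same fields are reachable programmatically. A sketch using the Docker SDK for Go that pulls out the two things the post-mortem cares about, container state and host-port mappings (assumes the SDK is available and a daemon is reachable via the usual environment variables):

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Container name taken from the dump above.
		info, err := cli.ContainerInspect(context.Background(), "embed-certs-20220725165448-14919")
		if err != nil {
			log.Fatal(err)
		}

		fmt.Println("status:", info.State.Status, "paused:", info.State.Paused)
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}
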
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220725165448-14919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220725165448-14919 logs -n 25: (2.58487473s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                | enable-default-cni-20220725163045-14919 | jenkins | v1.26.0 | 25 Jul 22 16:46 PDT | 25 Jul 22 16:46 PDT |
	|         | enable-default-cni-20220725163045-14919           |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:46 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220725163045-14919            | jenkins | v1.26.0 | 25 Jul 22 16:47 PDT | 25 Jul 22 16:47 PDT |
	|         | kubenet-20220725163045-14919                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:47 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:48 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:48 PDT | 25 Jul 22 16:53 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:50 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT | 25 Jul 22 16:51 PDT |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT | 25 Jul 22 16:51 PDT |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725164610-14919    | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725164719-14919         | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220725165448-14919        | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
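The Audit table above records the serial StartStop steps for this profile: start, enable metrics-server against the fake registry, stop, enable dashboard, start again, then pause and unpause. A sketch that replays the same commands with os/exec, as a reconstruction of the recorded sequence rather than the harness itself; the binary path and every flag are copied from the table:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const profile = "embed-certs-20220725165448-14919"
		steps := [][]string{
			{"start", "-p", profile, "--memory=2200", "--alsologtostderr", "--wait=true",
				"--embed-certs", "--driver=docker", "--kubernetes-version=v1.24.3"},
			{"addons", "enable", "metrics-server", "-p", profile,
				"--images=MetricsServer=k8s.gcr.io/echoserver:1.4",
				"--registries=MetricsServer=fake.domain"},
			{"stop", "-p", profile, "--alsologtostderr", "-v=3"},
			{"addons", "enable", "dashboard", "-p", profile,
				"--images=MetricsScraper=k8s.gcr.io/echoserver:1.4"},
			{"start", "-p", profile, "--memory=2200", "--alsologtostderr", "--wait=true",
				"--embed-certs", "--driver=docker", "--kubernetes-version=v1.24.3"},
			{"pause", "-p", profile, "--alsologtostderr", "-v=1"},
			{"unpause", "-p", profile, "--alsologtostderr", "-v=1"},
		}
		for _, args := range steps {
			cmd := exec.Command("out/minikube-darwin-amd64", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				log.Fatalf("step %v failed: %v", args, err)
			}
		}
	}
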
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 16:56:03
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 16:56:03.433534   31337 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:56:03.433731   31337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:56:03.433737   31337 out.go:309] Setting ErrFile to fd 2...
	I0725 16:56:03.433741   31337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:56:03.433881   31337 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:56:03.434424   31337 out.go:303] Setting JSON to false
	I0725 16:56:03.449478   31337 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10286,"bootTime":1658783077,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:56:03.449569   31337 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:56:03.471556   31337 out.go:177] * [embed-certs-20220725165448-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:56:03.515487   31337 notify.go:193] Checking for updates...
	I0725 16:56:03.537285   31337 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:56:03.559095   31337 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:56:03.580425   31337 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:56:03.602303   31337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:56:03.625261   31337 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:56:03.646919   31337 config.go:178] Loaded profile config "embed-certs-20220725165448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:56:03.647548   31337 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:56:03.716719   31337 docker.go:137] docker version: linux-20.10.17
	I0725 16:56:03.716857   31337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:56:03.850783   31337 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:56:03.793505502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:56:03.871916   31337 out.go:177] * Using the docker driver based on existing profile
	I0725 16:56:03.893953   31337 start.go:284] selected driver: docker
	I0725 16:56:03.893988   31337 start.go:808] validating driver "docker" against &{Name:embed-certs-20220725165448-14919 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:56:03.894188   31337 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:56:03.897532   31337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:56:04.045703   31337 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:56:03.982785914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:56:04.045859   31337 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 16:56:04.045875   31337 cni.go:95] Creating CNI manager for ""
	I0725 16:56:04.045886   31337 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:56:04.045899   31337 start_flags.go:310] config:
	{Name:embed-certs-20220725165448-14919 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
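The config dump above is what gets persisted a few lines below as the profile's config.json (the profile.go:148 save). A sketch that decodes just the fields quoted in this section; the struct is a hand-written subset for illustration, and the assumption that the on-disk JSON keys match these field names should be checked against a real profile:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	// Subset of the profile config visible in the log above; anything omitted
	// is simply ignored by encoding/json during unmarshalling.
	type clusterConfig struct {
		Name             string
		KeepContext      bool
		EmbedCerts       bool
		Memory           int
		CPUs             int
		Driver           string
		KubernetesConfig struct {
			KubernetesVersion string
			ClusterName       string
			ContainerRuntime  string
		}
	}

	func main() {
		if len(os.Args) < 2 {
			log.Fatal("usage: cfgdump <path to profile config.json>")
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			log.Fatal(err)
		}
		var cfg clusterConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: driver=%s k8s=%s embed-certs=%v\n",
			cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion, cfg.EmbedCerts)
	}
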
	I0725 16:56:04.088356   31337 out.go:177] * Starting control plane node embed-certs-20220725165448-14919 in cluster embed-certs-20220725165448-14919
	I0725 16:56:04.109451   31337 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 16:56:04.130134   31337 out.go:177] * Pulling base image ...
	I0725 16:56:04.172375   31337 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 16:56:04.172376   31337 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:56:04.172427   31337 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0725 16:56:04.172439   31337 cache.go:57] Caching tarball of preloaded images
	I0725 16:56:04.172566   31337 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 16:56:04.172579   31337 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on docker
	I0725 16:56:04.173197   31337 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/config.json ...
	I0725 16:56:04.236416   31337 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 16:56:04.236434   31337 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 16:56:04.236446   31337 cache.go:208] Successfully downloaded all kic artifacts
	I0725 16:56:04.236526   31337 start.go:370] acquiring machines lock for embed-certs-20220725165448-14919: {Name:mkbc95d1eab1ca3410e49bf2a4e793a24fb963ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 16:56:04.236618   31337 start.go:374] acquired machines lock for "embed-certs-20220725165448-14919" in 73.505µs
	I0725 16:56:04.236655   31337 start.go:95] Skipping create...Using existing machine configuration
	I0725 16:56:04.236666   31337 fix.go:55] fixHost starting: 
	I0725 16:56:04.236886   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 16:56:04.304136   31337 fix.go:103] recreateIfNeeded on embed-certs-20220725165448-14919: state=Stopped err=<nil>
	W0725 16:56:04.304166   31337 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 16:56:04.346631   31337 out.go:177] * Restarting existing docker container for "embed-certs-20220725165448-14919" ...
	I0725 16:56:03.930815   30645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:03.940063   30645 kubeadm.go:630] restartCluster took 4m5.611815756s
	W0725 16:56:03.940157   30645 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0725 16:56:03.940174   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:56:04.371868   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:56:04.382270   30645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:56:04.391315   30645 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:56:04.391409   30645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:56:04.400006   30645 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:56:04.400035   30645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:56:05.304425   30645 out.go:204]   - Generating certificates and keys ...
	I0725 16:56:04.367742   31337 cli_runner.go:164] Run: docker start embed-certs-20220725165448-14919
	I0725 16:56:04.744066   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 16:56:04.827385   31337 kic.go:415] container "embed-certs-20220725165448-14919" state is running.
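The inspect/start sequence above (fixHost) is the generic pattern for reviving a stopped KIC container: read the container's {{.State.Status}}, and only run docker start when it is not running. A minimal Go sketch of that check-then-start pattern, assuming the Docker CLI is on PATH (the helper name restartIfStopped is illustrative, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// restartIfStopped mirrors the inspect/start sequence in the log: read the
// container's state via `docker container inspect --format={{.State.Status}}`
// and run `docker start` only when it is not already running.
func restartIfStopped(name string) error {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return fmt.Errorf("inspect %s: %w", name, err)
	}
	if strings.TrimSpace(string(out)) != "running" {
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			return fmt.Errorf("start %s: %w", name, err)
		}
	}
	return nil
}

func main() {
	if err := restartIfStopped("embed-certs-20220725165448-14919"); err != nil {
		fmt.Println(err)
	}
}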
	I0725 16:56:04.828035   31337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725165448-14919
	I0725 16:56:04.912426   31337 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/config.json ...
	I0725 16:56:04.912942   31337 machine.go:88] provisioning docker machine ...
	I0725 16:56:04.912971   31337 ubuntu.go:169] provisioning hostname "embed-certs-20220725165448-14919"
	I0725 16:56:04.913056   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:04.999598   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:04.999819   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:04.999838   31337 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220725165448-14919 && echo "embed-certs-20220725165448-14919" | sudo tee /etc/hostname
	I0725 16:56:05.137366   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220725165448-14919
	
	I0725 16:56:05.137451   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:05.224934   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:05.225280   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:05.225297   31337 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220725165448-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220725165448-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220725165448-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 16:56:05.351826   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 16:56:05.351845   31337 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 16:56:05.351871   31337 ubuntu.go:177] setting up certificates
	I0725 16:56:05.351880   31337 provision.go:83] configureAuth start
	I0725 16:56:05.351957   31337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725165448-14919
	I0725 16:56:05.433243   31337 provision.go:138] copyHostCerts
	I0725 16:56:05.433345   31337 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 16:56:05.433355   31337 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 16:56:05.433478   31337 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 16:56:05.433791   31337 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 16:56:05.433801   31337 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 16:56:05.433872   31337 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 16:56:05.434037   31337 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 16:56:05.434043   31337 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 16:56:05.434112   31337 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 16:56:05.434245   31337 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220725165448-14919 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220725165448-14919]
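The provision step above issues a server certificate whose SAN list mixes IP addresses and hostnames. A minimal self-signed sketch of assembling such a SAN set with crypto/x509, using the values from the log; the CA-signed issuance minikube actually performs is omitted, so this is illustrative only:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-20220725165448-14919"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		// SANs as seen in the log: IPs and DNS names live in separate fields.
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "embed-certs-20220725165448-14919"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte DER cert with %d SAN entries\n",
		len(der), len(tmpl.IPAddresses)+len(tmpl.DNSNames))
}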
	I0725 16:56:05.543085   31337 provision.go:172] copyRemoteCerts
	I0725 16:56:05.543159   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 16:56:05.543212   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:05.626756   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:05.718355   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 16:56:05.738285   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0725 16:56:05.769330   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 16:56:05.792698   31337 provision.go:86] duration metric: configureAuth took 440.796611ms
	I0725 16:56:05.792721   31337 ubuntu.go:193] setting minikube options for container-runtime
	I0725 16:56:05.792935   31337 config.go:178] Loaded profile config "embed-certs-20220725165448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:56:05.793007   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:05.872213   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:05.872420   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:05.872432   31337 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 16:56:05.994661   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 16:56:05.994679   31337 ubuntu.go:71] root file system type: overlay
	I0725 16:56:05.994840   31337 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 16:56:05.994916   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.071541   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:06.071747   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:06.071803   31337 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 16:56:06.201902   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 16:56:06.201994   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.274921   31337 main.go:134] libmachine: Using SSH client type: native
	I0725 16:56:06.275076   31337 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 51310 <nil> <nil>}
	I0725 16:56:06.275096   31337 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 16:56:06.403965   31337 main.go:134] libmachine: SSH cmd err, output: <nil>: 
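The SSH command a few lines above is an idempotent compare-and-swap: diff the freshly rendered unit against the installed one, and only move it into place, reload systemd, and restart Docker when the contents differ. A local Go sketch of the same pattern (paths and unit name taken from the log; error handling trimmed; not minikube's implementation):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// syncUnit installs newPath over unitPath and restarts the service only when
// the contents differ, mirroring the `diff -u ... || { mv; daemon-reload;
// restart; }` shell pattern in the log.
func syncUnit(unitPath, newPath, service string) error {
	old, _ := os.ReadFile(unitPath) // a missing file reads as empty => differs
	newer, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(old, newer) {
		return nil // nothing changed; leave the running daemon untouched
	}
	if err := os.Rename(newPath, unitPath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", service},
		{"systemctl", "-f", "restart", service},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = syncUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
}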
	I0725 16:56:06.403988   31337 machine.go:91] provisioned docker machine in 1.491027379s
	I0725 16:56:06.404000   31337 start.go:307] post-start starting for "embed-certs-20220725165448-14919" (driver="docker")
	I0725 16:56:06.404006   31337 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 16:56:06.404073   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 16:56:06.404133   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.476046   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.566386   31337 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 16:56:06.569878   31337 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 16:56:06.569892   31337 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 16:56:06.569898   31337 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 16:56:06.569903   31337 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 16:56:06.569913   31337 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 16:56:06.570034   31337 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 16:56:06.570192   31337 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 16:56:06.570362   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 16:56:06.577828   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:56:06.594791   31337 start.go:310] post-start completed in 190.779597ms
	I0725 16:56:06.594866   31337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:56:06.594916   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.669069   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.756422   31337 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 16:56:06.761025   31337 fix.go:57] fixHost completed within 2.524342859s
	I0725 16:56:06.761037   31337 start.go:82] releasing machines lock for "embed-certs-20220725165448-14919", held for 2.524394197s
	I0725 16:56:06.761113   31337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725165448-14919
	I0725 16:56:06.833722   31337 ssh_runner.go:195] Run: systemctl --version
	I0725 16:56:06.833735   31337 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 16:56:06.833788   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.833798   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:06.913090   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.916204   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 16:56:06.999674   31337 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 16:56:07.221803   31337 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 16:56:07.221878   31337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 16:56:07.233712   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 16:56:07.246547   31337 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 16:56:07.308561   31337 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 16:56:07.377049   31337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:56:07.439815   31337 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 16:56:07.676316   31337 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 16:56:07.755611   31337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 16:56:07.831651   31337 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 16:56:07.841040   31337 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 16:56:07.841101   31337 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 16:56:07.846451   31337 start.go:471] Will wait 60s for crictl version
	I0725 16:56:07.846501   31337 ssh_runner.go:195] Run: sudo crictl version
	I0725 16:56:07.944939   31337 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 16:56:07.945009   31337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:56:07.979201   31337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 16:56:05.808767   30645 out.go:204]   - Booting up control plane ...
	I0725 16:56:08.057107   31337 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0725 16:56:08.057277   31337 cli_runner.go:164] Run: docker exec -t embed-certs-20220725165448-14919 dig +short host.docker.internal
	I0725 16:56:08.186719   31337 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 16:56:08.186830   31337 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 16:56:08.191311   31337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
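The hosts-file update above drops any stale host.minikube.internal line and appends a fresh mapping, so repeated starts stay idempotent. A plain-Go sketch of that filter-and-append rewrite (operating on an arbitrary hosts file path; a sketch only, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostRecord rewrites path so exactly one line maps name, matching the
// `grep -v $'\t<name>$' ...; echo "<ip>\t<name>"` pipeline in the log.
func setHostRecord(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale record for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = setHostRecord("/tmp/hosts", "192.168.65.2", "host.minikube.internal")
}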
	I0725 16:56:08.201156   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:08.275039   31337 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 16:56:08.275116   31337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:56:08.304877   31337 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 16:56:08.304899   31337 docker.go:542] Images already preloaded, skipping extraction
	I0725 16:56:08.304983   31337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 16:56:08.336195   31337 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 16:56:08.336253   31337 cache_images.go:84] Images are preloaded, skipping loading
	I0725 16:56:08.336397   31337 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 16:56:08.409222   31337 cni.go:95] Creating CNI manager for ""
	I0725 16:56:08.409235   31337 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:56:08.409251   31337 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 16:56:08.409279   31337 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220725165448-14919 NodeName:embed-certs-20220725165448-14919 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 16:56:08.409450   31337 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220725165448-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 16:56:08.409534   31337 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220725165448-14919 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 16:56:08.409594   31337 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0725 16:56:08.417474   31337 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 16:56:08.417537   31337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 16:56:08.424560   31337 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0725 16:56:08.437566   31337 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 16:56:08.468744   31337 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0725 16:56:08.481183   31337 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 16:56:08.484973   31337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 16:56:08.494671   31337 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919 for IP: 192.168.76.2
	I0725 16:56:08.494789   31337 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 16:56:08.494855   31337 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 16:56:08.495018   31337 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/client.key
	I0725 16:56:08.495092   31337 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/apiserver.key.31bdca25
	I0725 16:56:08.495177   31337 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/proxy-client.key
	I0725 16:56:08.495477   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 16:56:08.495545   31337 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 16:56:08.495559   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 16:56:08.495593   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 16:56:08.495624   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 16:56:08.495653   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 16:56:08.495726   31337 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 16:56:08.496246   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 16:56:08.513745   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 16:56:08.531066   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 16:56:08.548205   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/embed-certs-20220725165448-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 16:56:08.566013   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 16:56:08.582490   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 16:56:08.599475   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 16:56:08.616680   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 16:56:08.633438   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 16:56:08.650322   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 16:56:08.667527   31337 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 16:56:08.684813   31337 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 16:56:08.697928   31337 ssh_runner.go:195] Run: openssl version
	I0725 16:56:08.703211   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 16:56:08.710894   31337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 16:56:08.714829   31337 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 16:56:08.714882   31337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 16:56:08.719947   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 16:56:08.728099   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 16:56:08.736150   31337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:56:08.740028   31337 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:56:08.740070   31337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 16:56:08.745643   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 16:56:08.752922   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 16:56:08.760821   31337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 16:56:08.765131   31337 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 16:56:08.765176   31337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 16:56:08.770300   31337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
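The openssl/ln pairs above implement the c_rehash convention: OpenSSL looks up CAs in /etc/ssl/certs by <subject-hash>.0, so each PEM gets a hash-named symlink pointing at it (e.g. b5213941.0 for minikubeCA.pem in this run). A small Go sketch of one link, assuming openssl is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash recreates the hash symlink seen in the log: compute the
// subject hash with `openssl x509 -hash -noout -in <pem>` and link
// <certsDir>/<hash>.0 at the PEM. Illustrative sketch only.
func linkCertByHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/14919.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}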
	I0725 16:56:08.777357   31337 kubeadm.go:395] StartCluster: {Name:embed-certs-20220725165448-14919 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220725165448-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:56:08.777464   31337 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:56:08.807200   31337 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 16:56:08.814843   31337 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 16:56:08.814862   31337 kubeadm.go:626] restartCluster start
	I0725 16:56:08.814913   31337 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 16:56:08.821469   31337 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:08.821534   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 16:56:08.897952   31337 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220725165448-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:56:08.898156   31337 kubeconfig.go:127] "embed-certs-20220725165448-14919" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig - will repair!
	I0725 16:56:08.898466   31337 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 16:56:08.899825   31337 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 16:56:08.907910   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:08.907973   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:08.916840   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.118655   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.118753   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.129281   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.319023   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.319249   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.330056   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.517396   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.517539   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.528246   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.719033   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.719162   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.729548   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:09.919025   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:09.919173   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:09.929719   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.119141   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.119244   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.129805   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.318229   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.318452   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.328587   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.519054   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.519263   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.530051   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.719032   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.719238   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.729880   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:10.919240   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:10.919342   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:10.929774   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.117018   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.117113   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.126575   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.317191   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.317355   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.328052   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.519054   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.519269   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.529681   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.718964   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.719135   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.729819   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.917205   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.917274   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.925970   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.925980   31337 api_server.go:165] Checking apiserver status ...
	I0725 16:56:11.926026   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 16:56:11.934283   31337 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:11.934294   31337 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0725 16:56:11.934304   31337 kubeadm.go:1092] stopping kube-system containers ...
	I0725 16:56:11.934365   31337 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 16:56:11.964872   31337 docker.go:443] Stopping containers: [9a167f413b73 c2c372481520 fa18253e55a4 b4b22c2bf1f2 bd98a2b23e46 aae50f7a8dff 751586c3bb9b 8e494f6ee1bf 7d251a39f801 c3027cf7039f ed3d81f7d6d9 225d3bf16e2b 98c148ba1de9 fead1519fc44 f1baffe473a6 4f47378a827e]
	I0725 16:56:11.964950   31337 ssh_runner.go:195] Run: docker stop 9a167f413b73 c2c372481520 fa18253e55a4 b4b22c2bf1f2 bd98a2b23e46 aae50f7a8dff 751586c3bb9b 8e494f6ee1bf 7d251a39f801 c3027cf7039f ed3d81f7d6d9 225d3bf16e2b 98c148ba1de9 fead1519fc44 f1baffe473a6 4f47378a827e
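The two lines above show the cleanup step before reconfiguring: list container IDs matching kubelet's k8s_<container>_<pod>_<namespace>_ naming for kube-system, then stop them all in one `docker stop`. A short Go sketch of that list-then-stop batch (a sketch under the assumption that the Docker CLI is available; not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists container IDs whose names match kube-system
// pods and stops them in a single `docker stop` invocation, mirroring the log.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing running; nothing to stop
	}
	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println(err)
	}
}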
	I0725 16:56:11.994922   31337 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 16:56:12.005330   31337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:56:12.013063   31337 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 25 23:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 25 23:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jul 25 23:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 25 23:55 /etc/kubernetes/scheduler.conf
	
	I0725 16:56:12.013113   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 16:56:12.020769   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 16:56:12.028247   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 16:56:12.035399   31337 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:12.035447   31337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 16:56:12.042273   31337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 16:56:12.049752   31337 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:56:12.049803   31337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 16:56:12.056784   31337 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 16:56:12.064194   31337 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 16:56:12.064205   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:12.110551   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:12.991729   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:13.176129   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:13.230499   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:13.306926   31337 api_server.go:51] waiting for apiserver process to appear ...
	I0725 16:56:13.306998   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:13.818325   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:14.316810   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:14.816722   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:56:14.832982   31337 api_server.go:71] duration metric: took 1.526047531s to wait for apiserver process to appear ...
	I0725 16:56:14.833006   31337 api_server.go:87] waiting for apiserver healthz status ...
	I0725 16:56:14.833021   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:17.439565   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 16:56:17.439586   31337 api_server.go:102] status: https://127.0.0.1:51314/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 16:56:17.940421   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:17.947568   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 16:56:17.947582   31337 api_server.go:102] status: https://127.0.0.1:51314/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:56:18.439749   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:18.460813   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 16:56:18.460830   31337 api_server.go:102] status: https://127.0.0.1:51314/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 16:56:18.939728   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 16:56:18.948093   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 200:
	ok
	I0725 16:56:18.957429   31337 api_server.go:140] control plane version: v1.24.3
	I0725 16:56:18.957444   31337 api_server.go:130] duration metric: took 4.124403291s to wait for apiserver health ...
	I0725 16:56:18.957449   31337 cni.go:95] Creating CNI manager for ""
	I0725 16:56:18.957455   31337 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 16:56:18.957467   31337 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 16:56:18.966151   31337 system_pods.go:59] 8 kube-system pods found
	I0725 16:56:18.966170   31337 system_pods.go:61] "coredns-6d4b75cb6d-brjzw" [7a073b93-7d6d-41af-bbc5-b6bb4ba61b61] Running
	I0725 16:56:18.966174   31337 system_pods.go:61] "etcd-embed-certs-20220725165448-14919" [35f46355-a412-4e3a-9e75-41fb9d357be2] Running
	I0725 16:56:18.966180   31337 system_pods.go:61] "kube-apiserver-embed-certs-20220725165448-14919" [b920b524-5ee8-47c8-ab93-078997c96a9d] Running
	I0725 16:56:18.966184   31337 system_pods.go:61] "kube-controller-manager-embed-certs-20220725165448-14919" [6bd916cf-3e22-4a72-8eea-ad9fc77fcdac] Running
	I0725 16:56:18.966190   31337 system_pods.go:61] "kube-proxy-qz466" [2436156a-42df-4487-bbf0-3723eaaefdfb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 16:56:18.966197   31337 system_pods.go:61] "kube-scheduler-embed-certs-20220725165448-14919" [d4172f18-e47e-434b-aef2-c0c9dbab78d5] Running
	I0725 16:56:18.966205   31337 system_pods.go:61] "metrics-server-5c6f97fb75-dvwxz" [4be1f012-c669-4285-8fce-b98e892d097f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 16:56:18.966226   31337 system_pods.go:61] "storage-provisioner" [9a9f14a2-6357-4e11-9e55-238e2bc5349d] Running
	I0725 16:56:18.966241   31337 system_pods.go:74] duration metric: took 8.767149ms to wait for pod list to return data ...
	I0725 16:56:18.966251   31337 node_conditions.go:102] verifying NodePressure condition ...
	I0725 16:56:18.969371   31337 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 16:56:18.969384   31337 node_conditions.go:123] node cpu capacity is 6
	I0725 16:56:18.969392   31337 node_conditions.go:105] duration metric: took 3.137023ms to run NodePressure ...
	I0725 16:56:18.969403   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 16:56:19.130505   31337 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0725 16:56:19.134987   31337 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0725 16:56:19.418291   31337 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0725 16:56:19.990680   31337 kubeadm.go:777] kubelet initialised
	I0725 16:56:19.990692   31337 kubeadm.go:778] duration metric: took 860.168437ms waiting for restarted kubelet to initialise ...
	I0725 16:56:19.990701   31337 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 16:56:19.997037   31337 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-brjzw" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:20.006432   31337 pod_ready.go:92] pod "coredns-6d4b75cb6d-brjzw" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:20.006441   31337 pod_ready.go:81] duration metric: took 9.369186ms waiting for pod "coredns-6d4b75cb6d-brjzw" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:20.006448   31337 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:22.022967   31337 pod_ready.go:102] pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:24.520791   31337 pod_ready.go:102] pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:26.521281   31337 pod_ready.go:92] pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:26.521294   31337 pod_ready.go:81] duration metric: took 6.514796336s waiting for pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:26.521301   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.033931   31337 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:27.033944   31337 pod_ready.go:81] duration metric: took 512.6349ms waiting for pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.033950   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.038066   31337 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:27.038074   31337 pod_ready.go:81] duration metric: took 4.11923ms waiting for pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.038079   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qz466" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.042382   31337 pod_ready.go:92] pod "kube-proxy-qz466" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:27.042391   31337 pod_ready.go:81] duration metric: took 4.306864ms waiting for pod "kube-proxy-qz466" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:27.042397   31337 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:29.054332   31337 pod_ready.go:102] pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:31.553231   31337 pod_ready.go:102] pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:33.054275   31337 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 16:56:33.054288   31337 pod_ready.go:81] duration metric: took 6.011844144s waiting for pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:33.054295   31337 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace to be "Ready" ...
	I0725 16:56:35.064195   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:37.065735   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:39.564369   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:41.565036   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:43.566029   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:46.066803   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:48.565574   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:50.567360   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:53.064054   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:55.064766   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:57.066535   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:56:59.565727   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:01.567296   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:04.067915   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:06.564528   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:08.567321   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:11.064570   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:13.065974   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:15.066410   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:17.565524   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:20.064374   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:22.066550   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:24.567486   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:26.568010   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:29.064670   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:31.065977   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:33.067605   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:35.565701   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:37.566461   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:40.067424   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:42.564117   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:44.566188   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:46.567544   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:49.065322   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:51.067604   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:53.567982   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:56.064199   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:57:58.066495   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	W0725 16:58:00.726845   30645 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 16:58:00.726876   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 16:58:01.152676   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:58:01.162348   30645 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 16:58:01.162398   30645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 16:58:01.169739   30645 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 16:58:01.169757   30645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 16:58:01.932563   30645 out.go:204]   - Generating certificates and keys ...
	I0725 16:58:02.879021   30645 out.go:204]   - Booting up control plane ...
	I0725 16:58:00.067345   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:02.565160   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:05.066397   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:07.066907   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:09.564472   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:11.565607   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:14.064290   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:16.067942   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:18.568032   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:21.065165   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:23.065894   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:25.068053   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:27.568303   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:29.569270   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:32.067312   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:34.067798   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:36.567613   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:39.065477   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:41.067979   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:43.565007   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:45.566604   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:48.064632   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:50.067874   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:52.068045   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:54.568248   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:57.065466   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:58:59.065588   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:01.068271   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:03.564939   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:05.567021   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:08.066080   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:10.066132   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:12.067084   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:14.068876   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:16.566420   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:19.066562   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:21.066964   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:23.565970   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:26.067272   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:28.566308   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:31.065483   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:33.566418   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:36.066933   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:38.565560   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:40.566430   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:42.569077   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:45.068908   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:47.567704   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:50.068664   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:52.069481   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:57.797952   30645 kubeadm.go:397] StartCluster complete in 7m59.508645122s
	I0725 16:59:57.798033   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 16:59:57.827359   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.827371   30645 logs.go:276] No container was found matching "kube-apiserver"
	I0725 16:59:57.827433   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 16:59:57.857686   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.857699   30645 logs.go:276] No container was found matching "etcd"
	I0725 16:59:57.857755   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 16:59:57.887067   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.887079   30645 logs.go:276] No container was found matching "coredns"
	I0725 16:59:57.887137   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 16:59:57.916980   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.916992   30645 logs.go:276] No container was found matching "kube-scheduler"
	I0725 16:59:57.917054   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 16:59:57.946633   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.946646   30645 logs.go:276] No container was found matching "kube-proxy"
	I0725 16:59:57.946705   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 16:59:57.976302   30645 logs.go:274] 0 containers: []
	W0725 16:59:57.976314   30645 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 16:59:57.976371   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 16:59:58.006163   30645 logs.go:274] 0 containers: []
	W0725 16:59:58.006175   30645 logs.go:276] No container was found matching "storage-provisioner"
	I0725 16:59:58.006233   30645 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 16:59:58.034791   30645 logs.go:274] 0 containers: []
	W0725 16:59:58.034803   30645 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 16:59:58.034811   30645 logs.go:123] Gathering logs for kubelet ...
	I0725 16:59:58.034818   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 16:59:58.075762   30645 logs.go:123] Gathering logs for dmesg ...
	I0725 16:59:58.075777   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 16:59:58.087641   30645 logs.go:123] Gathering logs for describe nodes ...
	I0725 16:59:58.087653   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 16:59:58.142043   30645 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 16:59:58.142055   30645 logs.go:123] Gathering logs for Docker ...
	I0725 16:59:58.142062   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 16:59:58.156155   30645 logs.go:123] Gathering logs for container status ...
	I0725 16:59:58.156167   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 16:59:54.568030   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 16:59:56.569052   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:00.209432   30645 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053238365s)
	W0725 17:00:00.209581   30645 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 17:00:00.209596   30645 out.go:239] * 
	W0725 17:00:00.209762   30645 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 17:00:00.209776   30645 out.go:239] * 
	W0725 17:00:00.210311   30645 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 17:00:00.272919   30645 out.go:177] 
	W0725 17:00:00.315153   30645 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 17:00:00.315316   30645 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 17:00:00.315414   30645 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
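	The checks suggested above can be reproduced by hand from a shell inside the node (for example via 'minikube ssh'); a minimal sketch, with CONTAINERID standing in for whichever container turns out to be failing:
		systemctl status kubelet
		journalctl -xeu kubelet
		docker ps -a | grep kube | grep -v pause
		docker logs CONTAINERID
	If the kubelet is failing on its cgroup driver, the workaround suggested above is to restart the profile with:
		minikube start --extra-config=kubelet.cgroup-driver=systemd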
	I0725 17:00:00.372884   30645 out.go:177] 
	I0725 16:59:59.068427   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:01.567601   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:04.065736   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:06.066221   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:08.068476   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:10.068614   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:12.068934   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:14.568007   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:16.568732   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:19.068149   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:21.567711   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:24.065850   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:26.068727   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:28.568827   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:31.068963   31337 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace has status "Ready":"False"
	I0725 17:00:33.060983   31337 pod_ready.go:81] duration metric: took 4m0.00492833s waiting for pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace to be "Ready" ...
	E0725 17:00:33.061007   31337 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-dvwxz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 17:00:33.061024   31337 pod_ready.go:38] duration metric: took 4m13.06855299s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:00:33.061067   31337 kubeadm.go:630] restartCluster took 4m24.244360087s
	W0725 17:00:33.061193   31337 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0725 17:00:33.061224   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 17:00:35.469010   31337 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.40775314s)
	I0725 17:00:35.469071   31337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:00:35.478242   31337 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:00:35.486244   31337 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 17:00:35.486305   31337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:00:35.493582   31337 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 17:00:35.493607   31337 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 17:00:35.774076   31337 out.go:204]   - Generating certificates and keys ...
	I0725 17:00:36.489304   31337 out.go:204]   - Booting up control plane ...
	I0725 17:00:43.532995   31337 out.go:204]   - Configuring RBAC rules ...
	I0725 17:00:43.910442   31337 cni.go:95] Creating CNI manager for ""
	I0725 17:00:43.910470   31337 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:00:43.910508   31337 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 17:00:43.910631   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b minikube.k8s.io/name=embed-certs-20220725165448-14919 minikube.k8s.io/updated_at=2022_07_25T17_00_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:43.910632   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:44.050939   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:44.115011   31337 ops.go:34] apiserver oom_adj: -16
	I0725 17:00:44.651229   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:45.151189   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:45.650666   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:46.150900   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:46.650738   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:47.150365   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:47.650430   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:48.151145   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:48.651175   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:49.151341   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:49.652492   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:50.150623   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:50.650515   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:51.151780   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:51.650676   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:52.151196   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:52.650459   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:53.150583   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:53.650428   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:54.150525   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:54.651147   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:55.152508   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:55.652544   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:56.150422   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:56.650515   31337 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:00:56.724309   31337 kubeadm.go:1045] duration metric: took 12.813699078s to wait for elevateKubeSystemPrivileges.
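	The repeated 'kubectl get sa default' calls above are minikube polling until the default ServiceAccount exists before granting kube-system privileges; an equivalent manual wait from a shell inside the node, as a sketch using the same invocation the log shows:
		until sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig; do sleep 0.5; done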
	I0725 17:00:56.724324   31337 kubeadm.go:397] StartCluster complete in 4m47.944971599s
	I0725 17:00:56.724338   31337 settings.go:142] acquiring lock: {Name:mkcd702d4f365962a78fa014f59c2f8489658e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:00:56.724416   31337 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:00:56.725236   31337 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:00:57.240866   31337 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220725165448-14919" rescaled to 1
	I0725 17:00:57.240941   31337 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 17:00:57.240963   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 17:00:57.240989   31337 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 17:00:57.241141   31337 config.go:178] Loaded profile config "embed-certs-20220725165448-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:00:57.264201   31337 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220725165448-14919"
	I0725 17:00:57.264201   31337 addons.go:65] Setting dashboard=true in profile "embed-certs-20220725165448-14919"
	I0725 17:00:57.264203   31337 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220725165448-14919"
	I0725 17:00:57.264107   31337 out.go:177] * Verifying Kubernetes components...
	I0725 17:00:57.264219   31337 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220725165448-14919"
	I0725 17:00:57.264218   31337 addons.go:153] Setting addon dashboard=true in "embed-certs-20220725165448-14919"
	W0725 17:00:57.284986   31337 addons.go:162] addon dashboard should already be in state true
	I0725 17:00:57.284988   31337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:00:57.264220   31337 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220725165448-14919"
	I0725 17:00:57.264227   31337 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220725165448-14919"
	W0725 17:00:57.264229   31337 addons.go:162] addon storage-provisioner should already be in state true
	I0725 17:00:57.285060   31337 host.go:66] Checking if "embed-certs-20220725165448-14919" exists ...
	I0725 17:00:57.285068   31337 host.go:66] Checking if "embed-certs-20220725165448-14919" exists ...
	I0725 17:00:57.285070   31337 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220725165448-14919"
	W0725 17:00:57.285083   31337 addons.go:162] addon metrics-server should already be in state true
	I0725 17:00:57.285117   31337 host.go:66] Checking if "embed-certs-20220725165448-14919" exists ...
	I0725 17:00:57.285457   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 17:00:57.285592   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 17:00:57.285671   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 17:00:57.286385   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 17:00:57.415998   31337 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 17:00:57.399628   31337 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220725165448-14919"
	I0725 17:00:57.413972   31337 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 17:00:57.413983   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	W0725 17:00:57.416042   31337 addons.go:162] addon default-storageclass should already be in state true
	I0725 17:00:57.494630   31337 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 17:00:57.458016   31337 host.go:66] Checking if "embed-certs-20220725165448-14919" exists ...
	I0725 17:00:57.458063   31337 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:00:57.495429   31337 cli_runner.go:164] Run: docker container inspect embed-certs-20220725165448-14919 --format={{.State.Status}}
	I0725 17:00:57.515856   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 17:00:57.515966   31337 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 17:00:57.536794   31337 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 17:00:57.536890   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 17:00:57.536991   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 17:00:57.537137   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 17:00:57.610879   31337 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 17:00:57.649149   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 17:00:57.649174   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 17:00:57.649300   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 17:00:57.657976   31337 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220725165448-14919" to be "Ready" ...
	I0725 17:00:57.674072   31337 node_ready.go:49] node "embed-certs-20220725165448-14919" has status "Ready":"True"
	I0725 17:00:57.674087   31337 node_ready.go:38] duration metric: took 16.048164ms waiting for node "embed-certs-20220725165448-14919" to be "Ready" ...
	I0725 17:00:57.674097   31337 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:00:57.684401   31337 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-d6xzg" in "kube-system" namespace to be "Ready" ...
	I0725 17:00:57.685652   31337 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 17:00:57.685687   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 17:00:57.685773   31337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725165448-14919
	I0725 17:00:57.688461   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 17:00:57.690768   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 17:00:57.757363   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 17:00:57.783371   31337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51310 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/embed-certs-20220725165448-14919/id_rsa Username:docker}
	I0725 17:00:57.911602   31337 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 17:00:57.911614   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 17:00:57.917698   31337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:00:57.999960   31337 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 17:00:57.999986   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 17:00:58.018084   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 17:00:58.018102   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 17:00:58.022599   31337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 17:00:58.191357   31337 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:00:58.191380   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 17:00:58.200313   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 17:00:58.200332   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 17:00:58.223838   31337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:00:58.227495   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 17:00:58.227511   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 17:00:58.313210   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 17:00:58.313243   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 17:00:58.394540   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 17:00:58.394558   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 17:00:58.419464   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 17:00:58.419493   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 17:00:58.439397   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 17:00:58.457443   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 17:00:58.508592   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 17:00:58.508610   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 17:00:58.529325   31337 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:00:58.529341   31337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 17:00:58.612281   31337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:00:58.617864   31337 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.201784552s)
	I0725 17:00:58.617897   31337 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
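	The sed pipeline above splices a hosts block into the coredns Corefile ahead of its forward directive; the resulting ConfigMap is not captured in the log, but reconstructed from the sed expression the injected fragment is:
		hosts {
		   192.168.65.2 host.minikube.internal
		   fallthrough
		}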
	I0725 17:00:58.941047   31337 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220725165448-14919"
	I0725 17:00:59.701015   31337 pod_ready.go:92] pod "coredns-6d4b75cb6d-d6xzg" in "kube-system" namespace has status "Ready":"True"
	I0725 17:00:59.701031   31337 pod_ready.go:81] duration metric: took 2.016584043s waiting for pod "coredns-6d4b75cb6d-d6xzg" in "kube-system" namespace to be "Ready" ...
	I0725 17:00:59.701042   31337 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-swc44" in "kube-system" namespace to be "Ready" ...
	I0725 17:00:59.714761   31337 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.102440248s)
	I0725 17:00:59.740509   31337 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 17:00:59.782440   31337 addons.go:414] enableAddons completed in 2.541440275s
	I0725 17:01:01.714525   31337 pod_ready.go:102] pod "coredns-6d4b75cb6d-swc44" in "kube-system" namespace has status "Ready":"False"
	I0725 17:01:04.210103   31337 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-swc44" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-swc44" not found
	I0725 17:01:04.210118   31337 pod_ready.go:81] duration metric: took 4.509032206s waiting for pod "coredns-6d4b75cb6d-swc44" in "kube-system" namespace to be "Ready" ...
	E0725 17:01:04.210124   31337 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-swc44" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-swc44" not found
	I0725 17:01:04.210143   31337 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.214615   31337 pod_ready.go:92] pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:01:04.214624   31337 pod_ready.go:81] duration metric: took 4.473276ms waiting for pod "etcd-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.214630   31337 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.219336   31337 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:01:04.219346   31337 pod_ready.go:81] duration metric: took 4.71087ms waiting for pod "kube-apiserver-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.219353   31337 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.223633   31337 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:01:04.223643   31337 pod_ready.go:81] duration metric: took 4.283359ms waiting for pod "kube-controller-manager-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.223655   31337 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-btzlf" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.227898   31337 pod_ready.go:92] pod "kube-proxy-btzlf" in "kube-system" namespace has status "Ready":"True"
	I0725 17:01:04.227908   31337 pod_ready.go:81] duration metric: took 4.247966ms waiting for pod "kube-proxy-btzlf" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.227915   31337 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.410762   31337 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:01:04.410772   31337 pod_ready.go:81] duration metric: took 182.850933ms waiting for pod "kube-scheduler-embed-certs-20220725165448-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:01:04.410778   31337 pod_ready.go:38] duration metric: took 6.73660784s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:01:04.410794   31337 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:01:04.410850   31337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:01:04.422704   31337 api_server.go:71] duration metric: took 7.181690097s to wait for apiserver process to appear ...
	I0725 17:01:04.422724   31337 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:01:04.422734   31337 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51314/healthz ...
	I0725 17:01:04.429197   31337 api_server.go:266] https://127.0.0.1:51314/healthz returned 200:
	ok
	I0725 17:01:04.430620   31337 api_server.go:140] control plane version: v1.24.3
	I0725 17:01:04.430630   31337 api_server.go:130] duration metric: took 7.90082ms to wait for apiserver health ...
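	The healthz probe above can be repeated by hand against the same Docker-forwarded apiserver port (51314 in this run; the port varies between runs). The apiserver serves TLS, so this sketch skips certificate verification:
		curl -k https://127.0.0.1:51314/healthz
		ok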
	I0725 17:01:04.430635   31337 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:01:04.612902   31337 system_pods.go:59] 8 kube-system pods found
	I0725 17:01:04.612916   31337 system_pods.go:61] "coredns-6d4b75cb6d-d6xzg" [b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e] Running
	I0725 17:01:04.612921   31337 system_pods.go:61] "etcd-embed-certs-20220725165448-14919" [b4a7df5c-f7c3-401a-aae5-9282b70074bb] Running
	I0725 17:01:04.612924   31337 system_pods.go:61] "kube-apiserver-embed-certs-20220725165448-14919" [227f6a1e-3c8a-46d1-9c24-3665f3597f8c] Running
	I0725 17:01:04.612936   31337 system_pods.go:61] "kube-controller-manager-embed-certs-20220725165448-14919" [03e9614c-bbb2-41ce-a7bd-6f478a7ee2a9] Running
	I0725 17:01:04.612940   31337 system_pods.go:61] "kube-proxy-btzlf" [8deb0ba6-2b1a-4818-8ebc-1c4404059440] Running
	I0725 17:01:04.612944   31337 system_pods.go:61] "kube-scheduler-embed-certs-20220725165448-14919" [d684baa2-8a97-44a7-864a-1881f3ee5af9] Running
	I0725 17:01:04.612955   31337 system_pods.go:61] "metrics-server-5c6f97fb75-h9h79" [801a7dd2-dcd6-4bca-ad12-a098f6b4630f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:01:04.612962   31337 system_pods.go:61] "storage-provisioner" [548a2d46-6808-436e-98c4-b9f0e0c17662] Running
	I0725 17:01:04.612965   31337 system_pods.go:74] duration metric: took 182.326031ms to wait for pod list to return data ...
	I0725 17:01:04.612970   31337 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:01:04.810529   31337 default_sa.go:45] found service account: "default"
	I0725 17:01:04.810540   31337 default_sa.go:55] duration metric: took 197.564551ms for default service account to be created ...
	I0725 17:01:04.810545   31337 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 17:01:05.013205   31337 system_pods.go:86] 8 kube-system pods found
	I0725 17:01:05.013219   31337 system_pods.go:89] "coredns-6d4b75cb6d-d6xzg" [b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e] Running
	I0725 17:01:05.013224   31337 system_pods.go:89] "etcd-embed-certs-20220725165448-14919" [b4a7df5c-f7c3-401a-aae5-9282b70074bb] Running
	I0725 17:01:05.013228   31337 system_pods.go:89] "kube-apiserver-embed-certs-20220725165448-14919" [227f6a1e-3c8a-46d1-9c24-3665f3597f8c] Running
	I0725 17:01:05.013234   31337 system_pods.go:89] "kube-controller-manager-embed-certs-20220725165448-14919" [03e9614c-bbb2-41ce-a7bd-6f478a7ee2a9] Running
	I0725 17:01:05.013237   31337 system_pods.go:89] "kube-proxy-btzlf" [8deb0ba6-2b1a-4818-8ebc-1c4404059440] Running
	I0725 17:01:05.013241   31337 system_pods.go:89] "kube-scheduler-embed-certs-20220725165448-14919" [d684baa2-8a97-44a7-864a-1881f3ee5af9] Running
	I0725 17:01:05.013263   31337 system_pods.go:89] "metrics-server-5c6f97fb75-h9h79" [801a7dd2-dcd6-4bca-ad12-a098f6b4630f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:01:05.013267   31337 system_pods.go:89] "storage-provisioner" [548a2d46-6808-436e-98c4-b9f0e0c17662] Running
	I0725 17:01:05.013271   31337 system_pods.go:126] duration metric: took 202.721689ms to wait for k8s-apps to be running ...
	I0725 17:01:05.013275   31337 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 17:01:05.013331   31337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:01:05.025516   31337 system_svc.go:56] duration metric: took 12.235142ms WaitForService to wait for kubelet.
	I0725 17:01:05.025533   31337 kubeadm.go:572] duration metric: took 7.784518192s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 17:01:05.025561   31337 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:01:05.211027   31337 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:01:05.211042   31337 node_conditions.go:123] node cpu capacity is 6
	I0725 17:01:05.211049   31337 node_conditions.go:105] duration metric: took 185.481124ms to run NodePressure ...
	I0725 17:01:05.211066   31337 start.go:216] waiting for startup goroutines ...
	I0725 17:01:05.246259   31337 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0725 17:01:05.271161   31337 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220725165448-14919" cluster and "default" namespace by default
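	With the kubeconfig context written, the state reported below can be confirmed from the host; a sketch using the kubectl 1.24.1 noted above (minikube names the context after the profile):
		kubectl config current-context   # embed-certs-20220725165448-14919
		kubectl get pods -A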
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 23:56:04 UTC, end at Tue 2022-07-26 00:02:00 UTC. --
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.531690086Z" level=info msg="ignoring event" container=f51838bbe79569817cec8830f282d528905e348576fa586265213300ef1006fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.599835692Z" level=info msg="ignoring event" container=5360920e9165fb5fc1ea74da954afeb323da7f72e5b84848e646e5cb288208b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.665761682Z" level=info msg="ignoring event" container=7406ae2e4a6cc4904cb0f26bef5b440d1cc525badbc686ef11819b4421f1d2df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.746469967Z" level=info msg="ignoring event" container=bb09fee3656978a5ae31c2ed1653b76935ca1a760acd0bb2254ced3361b6315b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.812720126Z" level=info msg="ignoring event" container=1a211623a0f3a3b3c7433953256eac67f5740baec04d1ff90750a1157a654730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:34 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:34.935840230Z" level=info msg="ignoring event" container=d9f0ab99940b7d09e1791a342441dc5eabdbcd63dbd8ed2dfd3e544ab1c2fb75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:35 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:35.005065305Z" level=info msg="ignoring event" container=dea0951dc2f8669e95f269d52186b5cacca8c31f5e9a9c49b2fb50abbc53f332 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:35 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:35.100776984Z" level=info msg="ignoring event" container=918eacec4c03004ec202e37b369028e5b51e7e41212875ae7d1b8e3123c5ae49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:00:59 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:59.439108257Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:00:59 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:59.439155872Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:00:59 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:00:59.440466923Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:01:00 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:00.729531071Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 26 00:01:03 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:03.342003464Z" level=info msg="ignoring event" container=0bf7b88994572a2a72f9f4887796421435cd8f2bde611adb11b590e653d34804 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:01:03 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:03.518785392Z" level=info msg="ignoring event" container=fd641746d2695ac7f53aec36776b6e9b218bf6590a3e71346e7135f37019f94c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:01:06 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:06.485766887Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 26 00:01:06 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:06.809369781Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 26 00:01:10 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:10.242457340Z" level=info msg="ignoring event" container=8e6389c3d921c90688a2e4c4e247f99261edac7ac16b37de83066be350f9d475 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:01:10 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:10.866214525Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:01:10 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:10.866370454Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:01:10 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:10.867613609Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:01:11 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:11.270397457Z" level=info msg="ignoring event" container=d1fb63dcdfcd94ef6f5d272828c1527e4782fd15dd1cd643972f67a8a958aadb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:01:58 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:58.041136479Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:01:58 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:58.041197761Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:01:58 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:58.042498029Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:01:58 embed-certs-20220725165448-14919 dockerd[563]: time="2022-07-26T00:01:58.530324605Z" level=info msg="ignoring event" container=f0baa2f6615372c5be04c6277a5e6aafd5fdabefa5ad3398a7281b4c85c75532 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
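	The repeated fake.domain pull failures above line up with the metrics-server image deliberately pointed at fake.domain/k8s.gcr.io/echoserver:1.4 earlier in the run, which is why that pod never leaves Pending; the same error reproduces with a manual pull (a sketch, expected to fail):
		docker pull fake.domain/k8s.gcr.io/echoserver:1.4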
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	f0baa2f661537       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   abf7b25d9ca8c
	998469abc2552       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   56 seconds ago       Running             kubernetes-dashboard        0                   142702a0489a6
	cd605c9b8b838       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   562470f1edc6a
	0205641d17436       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   c94891dfa4c62
	91ebce0851267       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   a4957d3849348
	88c2b8e191f66       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   b7fa674bf8856
	3e533a3b17d40       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   96df2802fa4d3
	279d2db5ac5e5       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   e53b21cb236db
	15dfe7450e920       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   01bc1c2f2bb3a
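	The table above is the CRI-side view of the node's containers; a roughly equivalent Docker-side listing from a shell inside the node, as a sketch:
		docker ps -a --format "table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}"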
	
	* 
	* ==> coredns [0205641d1743] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
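	The reload above follows the coredns ConfigMap replacement performed earlier in the run; the active configuration can be checked with the same kubectl read the test itself uses:
		sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml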
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220725165448-14919
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220725165448-14919
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b
	                    minikube.k8s.io/name=embed-certs-20220725165448-14919
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T17_00_43_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Jul 2022 00:00:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220725165448-14919
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Jul 2022 00:01:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Jul 2022 00:01:54 +0000   Tue, 26 Jul 2022 00:00:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Jul 2022 00:01:54 +0000   Tue, 26 Jul 2022 00:00:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Jul 2022 00:01:54 +0000   Tue, 26 Jul 2022 00:00:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Jul 2022 00:01:54 +0000   Tue, 26 Jul 2022 00:01:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20220725165448-14919
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                ff34fd86-8938-44ae-899e-d617c3d39649
	  Boot ID:                    95c3cee9-5325-46b1-8645-b2afb4791ab2
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-d6xzg                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     65s
	  kube-system                 etcd-embed-certs-20220725165448-14919                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kube-apiserver-embed-certs-20220725165448-14919             250m (4%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-embed-certs-20220725165448-14919    200m (3%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-btzlf                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-embed-certs-20220725165448-14919             100m (1%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 metrics-server-5c6f97fb75-h9h79                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         63s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-s8h8w                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-kxp9z                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s (x4 over 84s)  kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x3 over 84s)  kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x3 over 84s)  kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s                kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientPID
	  Normal  NodeReady                77s                kubelet          Node embed-certs-20220725165448-14919 status is now: NodeReady
	  Normal  RegisteredNode           66s                node-controller  Node embed-certs-20220725165448-14919 event: Registered Node embed-certs-20220725165448-14919 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node embed-certs-20220725165448-14919 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s                 kubelet          Node embed-certs-20220725165448-14919 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s                 kubelet          Node embed-certs-20220725165448-14919 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [15dfe7450e92] <==
	* {"level":"info","ts":"2022-07-26T00:00:38.011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-26T00:00:38.012Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-26T00:00:38.012Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-26T00:00:38.012Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-26T00:00:38.012Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-26T00:00:38.013Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:00:38.013Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:00:38.053Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-20220725165448-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:00:38.054Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:00:38.055Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-26T00:00:38.055Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-26T00:00:38.055Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-26T00:00:38.055Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:02:01 up  1:08,  0 users,  load average: 0.94, 0.88, 1.07
	Linux embed-certs-20220725165448-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [88c2b8e191f6] <==
	* I0726 00:00:42.830093       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0726 00:00:43.723854       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0726 00:00:43.729426       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0726 00:00:43.738601       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0726 00:00:43.822744       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0726 00:00:56.388774       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0726 00:00:56.488283       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0726 00:00:56.994122       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0726 00:00:58.945882       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.108.57.55]
	I0726 00:00:59.710815       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.109.250.165]
	I0726 00:00:59.720704       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.105.132.245]
	W0726 00:00:59.740422       1 handler_proxy.go:102] no RequestInfo found in the context
	W0726 00:00:59.740506       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:00:59.740526       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0726 00:00:59.740537       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0726 00:00:59.740554       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0726 00:00:59.741794       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0726 00:01:59.699012       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:01:59.699073       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0726 00:01:59.699080       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0726 00:01:59.699181       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:01:59.699227       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0726 00:01:59.701215       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3e533a3b17d4] <==
	* I0726 00:00:56.692674       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-d6xzg"
	I0726 00:00:56.755754       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0726 00:00:56.759345       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-swc44"
	I0726 00:00:58.730758       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0726 00:00:58.734080       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0726 00:00:58.737018       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0726 00:00:58.741559       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-h9h79"
	I0726 00:00:59.518887       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0726 00:00:59.523819       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:00:59.528696       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:00:59.529448       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0726 00:00:59.532618       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:00:59.532631       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0726 00:00:59.537260       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:00:59.537302       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0726 00:00:59.537315       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:00:59.540563       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0726 00:00:59.547236       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:00:59.547259       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:00:59.550096       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:00:59.550336       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0726 00:00:59.564729       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-kxp9z"
	I0726 00:00:59.564826       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-s8h8w"
	E0726 00:01:53.866257       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0726 00:01:53.873130       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [91ebce085126] <==
	* I0726 00:00:56.968103       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0726 00:00:56.968164       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0726 00:00:56.968183       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0726 00:00:56.990277       1 server_others.go:206] "Using iptables Proxier"
	I0726 00:00:56.990314       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0726 00:00:56.990321       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0726 00:00:56.990330       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0726 00:00:56.990348       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:00:56.990444       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:00:56.991455       1 server.go:661] "Version info" version="v1.24.3"
	I0726 00:00:56.991483       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0726 00:00:56.992019       1 config.go:317] "Starting service config controller"
	I0726 00:00:56.992051       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0726 00:00:56.992065       1 config.go:444] "Starting node config controller"
	I0726 00:00:56.992068       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0726 00:00:56.992715       1 config.go:226] "Starting endpoint slice config controller"
	I0726 00:00:56.992744       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0726 00:00:57.092185       1 shared_informer.go:262] Caches are synced for service config
	I0726 00:00:57.092248       1 shared_informer.go:262] Caches are synced for node config
	I0726 00:00:57.093676       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [279d2db5ac5e] <==
	* W0726 00:00:40.743721       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0726 00:00:40.743752       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0726 00:00:40.743965       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0726 00:00:40.743477       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:00:40.744380       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0726 00:00:40.744390       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0726 00:00:40.744076       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0726 00:00:40.744401       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0726 00:00:40.744410       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0726 00:00:40.744417       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0726 00:00:41.589070       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0726 00:00:41.589106       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0726 00:00:41.593409       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0726 00:00:41.593451       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0726 00:00:41.610357       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0726 00:00:41.610426       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0726 00:00:41.620405       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0726 00:00:41.620455       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0726 00:00:41.788799       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0726 00:00:41.788836       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0726 00:00:41.809838       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0726 00:00:41.809876       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0726 00:00:41.832642       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:00:41.832679       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0726 00:00:43.732949       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 23:56:04 UTC, end at Tue 2022-07-26 00:02:02 UTC. --
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572195    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8deb0ba6-2b1a-4818-8ebc-1c4404059440-kube-proxy\") pod \"kube-proxy-btzlf\" (UID: \"8deb0ba6-2b1a-4818-8ebc-1c4404059440\") " pod="kube-system/kube-proxy-btzlf"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572232    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8deb0ba6-2b1a-4818-8ebc-1c4404059440-lib-modules\") pod \"kube-proxy-btzlf\" (UID: \"8deb0ba6-2b1a-4818-8ebc-1c4404059440\") " pod="kube-system/kube-proxy-btzlf"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572312    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27p42\" (UniqueName: \"kubernetes.io/projected/548a2d46-6808-436e-98c4-b9f0e0c17662-kube-api-access-27p42\") pod \"storage-provisioner\" (UID: \"548a2d46-6808-436e-98c4-b9f0e0c17662\") " pod="kube-system/storage-provisioner"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572334    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkzwd\" (UniqueName: \"kubernetes.io/projected/801a7dd2-dcd6-4bca-ad12-a098f6b4630f-kube-api-access-qkzwd\") pod \"metrics-server-5c6f97fb75-h9h79\" (UID: \"801a7dd2-dcd6-4bca-ad12-a098f6b4630f\") " pod="kube-system/metrics-server-5c6f97fb75-h9h79"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572423    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55753ac7-fd73-4470-be9e-0e5b0e8d250e-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-kxp9z\" (UID: \"55753ac7-fd73-4470-be9e-0e5b0e8d250e\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-kxp9z"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572549    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4hqt\" (UniqueName: \"kubernetes.io/projected/b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e-kube-api-access-b4hqt\") pod \"coredns-6d4b75cb6d-d6xzg\" (UID: \"b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e\") " pod="kube-system/coredns-6d4b75cb6d-d6xzg"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572603    9784 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mswqq\" (UniqueName: \"kubernetes.io/projected/55753ac7-fd73-4470-be9e-0e5b0e8d250e-kube-api-access-mswqq\") pod \"kubernetes-dashboard-5fd5574d9f-kxp9z\" (UID: \"55753ac7-fd73-4470-be9e-0e5b0e8d250e\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-kxp9z"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:55.572621    9784 reconciler.go:157] "Reconciler: start to sync state"
	Jul 26 00:01:55 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:55.729701    9784 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220725165448-14919\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220725165448-14919"
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.128254    9784 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220725165448-14919\" already exists" pod="kube-system/etcd-embed-certs-20220725165448-14919"
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.327474    9784 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220725165448-14919\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220725165448-14919"
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:56.523281    9784 request.go:601] Waited for 1.050292898s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.528359    9784 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220725165448-14919\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220725165448-14919"
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.674732    9784 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.674894    9784 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e-config-volume podName:b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e nodeName:}" failed. No retries permitted until 2022-07-26 00:01:57.174866482 +0000 UTC m=+3.154435978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e-config-volume") pod "coredns-6d4b75cb6d-d6xzg" (UID: "b18aa3f6-ba3f-40fe-9e4e-379db8ab9e9e") : failed to sync configmap cache: timed out waiting for the condition
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.675174    9784 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jul 26 00:01:56 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:56.675384    9784 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8deb0ba6-2b1a-4818-8ebc-1c4404059440-kube-proxy podName:8deb0ba6-2b1a-4818-8ebc-1c4404059440 nodeName:}" failed. No retries permitted until 2022-07-26 00:01:57.175359374 +0000 UTC m=+3.154928869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/8deb0ba6-2b1a-4818-8ebc-1c4404059440-kube-proxy") pod "kube-proxy-btzlf" (UID: "8deb0ba6-2b1a-4818-8ebc-1c4404059440") : failed to sync configmap cache: timed out waiting for the condition
	Jul 26 00:01:58 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:58.043223    9784 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 26 00:01:58 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:58.043295    9784 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 26 00:01:58 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:58.043414    9784 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qkzwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-h9h79_kube-system(801a7dd2-dcd6-4bca-ad12-a098f6b4630f): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 26 00:01:58 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:58.043442    9784 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-h9h79" podUID=801a7dd2-dcd6-4bca-ad12-a098f6b4630f
	Jul 26 00:01:58 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:58.225954    9784 scope.go:110] "RemoveContainer" containerID="d1fb63dcdfcd94ef6f5d272828c1527e4782fd15dd1cd643972f67a8a958aadb"
	Jul 26 00:01:59 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:59.526267    9784 scope.go:110] "RemoveContainer" containerID="d1fb63dcdfcd94ef6f5d272828c1527e4782fd15dd1cd643972f67a8a958aadb"
	Jul 26 00:01:59 embed-certs-20220725165448-14919 kubelet[9784]: I0726 00:01:59.526928    9784 scope.go:110] "RemoveContainer" containerID="f0baa2f6615372c5be04c6277a5e6aafd5fdabefa5ad3398a7281b4c85c75532"
	Jul 26 00:01:59 embed-certs-20220725165448-14919 kubelet[9784]: E0726 00:01:59.527140    9784 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-s8h8w_kubernetes-dashboard(0451d129-9e25-448c-b4a6-6a160fa6d714)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-s8h8w" podUID=0451d129-9e25-448c-b4a6-6a160fa6d714
	
	* 
	* ==> kubernetes-dashboard [998469abc255] <==
	* 2022/07/26 00:01:05 Using namespace: kubernetes-dashboard
	2022/07/26 00:01:05 Using in-cluster config to connect to apiserver
	2022/07/26 00:01:05 Using secret token for csrf signing
	2022/07/26 00:01:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/26 00:01:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/26 00:01:06 Successful initial request to the apiserver, version: v1.24.3
	2022/07/26 00:01:06 Generating JWE encryption key
	2022/07/26 00:01:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/26 00:01:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/26 00:01:06 Initializing JWE encryption key from synchronized object
	2022/07/26 00:01:06 Creating in-cluster Sidecar client
	2022/07/26 00:01:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/26 00:01:06 Serving insecurely on HTTP port: 9090
	2022/07/26 00:01:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/26 00:01:05 Starting overwatch
	
	* 
	* ==> storage-provisioner [cd605c9b8b83] <==
	* I0726 00:00:59.894404       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0726 00:00:59.905885       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0726 00:00:59.905937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0726 00:00:59.911522       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0726 00:00:59.911794       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56f23d28-2a20-4c5d-a9f9-0ae9ce087809", APIVersion:"v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220725165448-14919_366cb3b1-65a6-4bd5-ae91-5a5581d3ab6d became leader
	I0726 00:00:59.911846       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220725165448-14919_366cb3b1-65a6-4bd5-ae91-5a5581d3ab6d!
	I0726 00:01:00.012988       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220725165448-14919_366cb3b1-65a6-4bd5-ae91-5a5581d3ab6d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220725165448-14919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-h9h79
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220725165448-14919 describe pod metrics-server-5c6f97fb75-h9h79
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220725165448-14919 describe pod metrics-server-5c6f97fb75-h9h79: exit status 1 (275.584325ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-h9h79" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220725165448-14919 describe pod metrics-server-5c6f97fb75-h9h79: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (43.70s)
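Note on the post-mortem above: helpers_test.go first lists non-running pods across all namespaces (get po -A --field-selector=status.phase!=Running), then describes each one by bare name. The describe step is issued without a -n flag, so kubectl looks in the default namespace, which is one way a kube-system pod such as metrics-server-5c6f97fb75-h9h79 can come back NotFound as in the stderr block above. A hedged Go sketch of those two steps (hypothetical helper, not the suite's code; context name taken from this run):

	// postmortem_pods.go - hedged sketch (hypothetical, not the suite's code)
	// of the two post-mortem steps above: list non-running pods across all
	// namespaces, then describe each by bare name. Because describe gets no
	// -n flag here, pods outside the default namespace report NotFound.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubectl(args ...string) (string, error) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		ctx := "embed-certs-20220725165448-14919" // context from this run
		pods, _ := kubectl("--context", ctx, "get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running")
		for _, pod := range strings.Fields(pods) {
			desc, err := kubectl("--context", ctx, "describe", "pod", pod)
			fmt.Println(desc) // on error this is e.g. Error from server (NotFound): ...
			if err != nil {
				fmt.Printf("describe %s: %v\n", pod, err)
			}
		}
	}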

x
+
TestStartStop/group/default-k8s-different-port/serial/Pause (43.9s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220725170207-14919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919
E0725 17:08:41.276506   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919: exit status 2 (16.109166425s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919: exit status 2 (16.109856526s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220725170207-14919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220725170207-14919 --alsologtostderr -v=1: (1.010255011s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919
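Note: this is the same failure shape as the embed-certs run above: pause returns cleanly, but the subsequent status --format={{.APIServer}} probe prints "Stopped" where the harness wants "Paused". A standalone sketch of that post-pause assertion (hypothetical, not part of the test suite; binary path and profile name are the ones from this run):

	// pause_status_check.go - hedged, standalone sketch (not the suite's
	// code) of the assertion behind `post-pause apiserver status =
	// "Stopped"; want = "Paused"` above. Binary path and profile name are
	// the ones from this run.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "default-k8s-different-port-20220725170207-14919"
		out, err := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
		got := strings.TrimSpace(string(out))
		if err != nil {
			// minikube status exits non-zero for non-Running states, which is
			// why the harness logs "status error: exit status 2 (may be ok)".
			fmt.Printf("status error: %v (may be ok)\n", err)
		}
		if got != "Paused" {
			fmt.Printf("post-pause apiserver status = %q; want = %q\n", got, "Paused")
			os.Exit(1)
		}
	}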
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220725170207-14919
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220725170207-14919:

-- stdout --
	[
	    {
	        "Id": "bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016",
	        "Created": "2022-07-26T00:02:15.01401926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288312,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-26T00:03:19.660246735Z",
	            "FinishedAt": "2022-07-26T00:03:17.680795745Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016/hostname",
	        "HostsPath": "/var/lib/docker/containers/bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016/hosts",
	        "LogPath": "/var/lib/docker/containers/bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016/bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016-json.log",
	        "Name": "/default-k8s-different-port-20220725170207-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220725170207-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220725170207-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3463a3777ee8209a745216ab6489f4737e7fe0cdb7bc79dd7cef91112e447418-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c70
0ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/d
ocker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a70
0e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3463a3777ee8209a745216ab6489f4737e7fe0cdb7bc79dd7cef91112e447418/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3463a3777ee8209a745216ab6489f4737e7fe0cdb7bc79dd7cef91112e447418/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3463a3777ee8209a745216ab6489f4737e7fe0cdb7bc79dd7cef91112e447418/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220725170207-14919",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220725170207-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220725170207-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220725170207-14919",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220725170207-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f1335303166461ff045a145630cde83ce4fa4487a583a6085de7b6a8ce55b56",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52035"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52036"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52037"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52038"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52039"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3f1335303166",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220725170207-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bfc479d3fe89",
	                        "default-k8s-different-port-20220725170207-14919"
	                    ],
	                    "NetworkID": "b7d51d72d4084e46d1ce7d0a3c5830a3c9dedc45e1eb06e45db4cc80ba01ee49",
	                    "EndpointID": "bdfb6e273d034cae79962c231a3e35e7623a545732ab78153823864aeeddd680",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
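Aside: the inspect output above is what drives the rest of the provisioning log. The container publishes sshd's 22/tcp on an ephemeral host port (52035 here), and the later log lines extract it with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. A minimal Go sketch of the same lookup, assuming trimmed-down types rather than the full Docker API structs:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry keeps only the fields this lookup needs; the real
// `docker inspect` document carries far more (see the JSON above).
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// sshHostPort returns the host port that a container's 22/tcp is published
// to -- the same value the template lookup in the logs retrieves.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry // inspect output is always a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 || len(entries[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		return "", fmt.Errorf("no published 22/tcp binding for %s", container)
	}
	return entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
}

func main() {
	port, err := sshHostPort("default-k8s-different-port-20220725170207-14919")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port) // "52035" per the inspect output above
}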
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220725170207-14919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220725170207-14919 logs -n 25: (2.991339827s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220725164610-14919            | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220725170207-14919      | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | disable-driver-mounts-20220725170207-14919        |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 17:03:18
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:03:18.398620   32282 out.go:296] Setting OutFile to fd 1 ...
	I0725 17:03:18.398790   32282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:03:18.398795   32282 out.go:309] Setting ErrFile to fd 2...
	I0725 17:03:18.398799   32282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:03:18.398900   32282 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 17:03:18.399360   32282 out.go:303] Setting JSON to false
	I0725 17:03:18.414237   32282 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10721,"bootTime":1658783077,"procs":370,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 17:03:18.414340   32282 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 17:03:18.435902   32282 out.go:177] * [default-k8s-different-port-20220725170207-14919] minikube v1.26.0 on Darwin 12.5
	I0725 17:03:18.479357   32282 notify.go:193] Checking for updates...
	I0725 17:03:18.501037   32282 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 17:03:18.522225   32282 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:03:18.542987   32282 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 17:03:18.568949   32282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:03:18.590061   32282 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 17:03:18.611319   32282 config.go:178] Loaded profile config "default-k8s-different-port-20220725170207-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:03:18.611660   32282 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 17:03:18.680533   32282 docker.go:137] docker version: linux-20.10.17
	I0725 17:03:18.680654   32282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:03:18.813347   32282 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:03:18.746667635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 17:03:18.835386   32282 out.go:177] * Using the docker driver based on existing profile
	I0725 17:03:18.857165   32282 start.go:284] selected driver: docker
	I0725 17:03:18.857196   32282 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220725170207-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220725170207-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:03:18.857354   32282 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:03:18.860300   32282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:03:18.993840   32282 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:03:18.927496197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 17:03:18.994050   32282 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:03:18.994067   32282 cni.go:95] Creating CNI manager for ""
	I0725 17:03:18.994078   32282 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:03:18.994086   32282 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220725170207-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220725170207-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:03:19.037674   32282 out.go:177] * Starting control plane node default-k8s-different-port-20220725170207-14919 in cluster default-k8s-different-port-20220725170207-14919
	I0725 17:03:19.058862   32282 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 17:03:19.080861   32282 out.go:177] * Pulling base image ...
	I0725 17:03:19.102764   32282 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 17:03:19.102767   32282 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:03:19.102846   32282 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0725 17:03:19.102867   32282 cache.go:57] Caching tarball of preloaded images
	I0725 17:03:19.103059   32282 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 17:03:19.103080   32282 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0725 17:03:19.104118   32282 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/config.json ...
	I0725 17:03:19.169074   32282 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 17:03:19.169097   32282 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 17:03:19.169116   32282 cache.go:208] Successfully downloaded all kic artifacts
	I0725 17:03:19.169166   32282 start.go:370] acquiring machines lock for default-k8s-different-port-20220725170207-14919: {Name:mkc494994dcd0861e1ae31a1dc7096d6db767ab9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:03:19.169245   32282 start.go:374] acquired machines lock for "default-k8s-different-port-20220725170207-14919" in 62.803µs
	I0725 17:03:19.169266   32282 start.go:95] Skipping create...Using existing machine configuration
	I0725 17:03:19.169274   32282 fix.go:55] fixHost starting: 
	I0725 17:03:19.169484   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:03:19.236412   32282 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220725170207-14919: state=Stopped err=<nil>
	W0725 17:03:19.236440   32282 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 17:03:19.279905   32282 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220725170207-14919" ...
	I0725 17:03:19.300807   32282 cli_runner.go:164] Run: docker start default-k8s-different-port-20220725170207-14919
	I0725 17:03:19.660989   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:03:19.737746   32282 kic.go:415] container "default-k8s-different-port-20220725170207-14919" state is running.
	I0725 17:03:19.738389   32282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725170207-14919
	I0725 17:03:19.820894   32282 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/config.json ...
	I0725 17:03:19.821365   32282 machine.go:88] provisioning docker machine ...
	I0725 17:03:19.821393   32282 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220725170207-14919"
	I0725 17:03:19.821467   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:19.902729   32282 main.go:134] libmachine: Using SSH client type: native
	I0725 17:03:19.902919   32282 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52035 <nil> <nil>}
	I0725 17:03:19.902932   32282 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220725170207-14919 && echo "default-k8s-different-port-20220725170207-14919" | sudo tee /etc/hostname
	I0725 17:03:20.051302   32282 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220725170207-14919
	
	I0725 17:03:20.051384   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:20.133984   32282 main.go:134] libmachine: Using SSH client type: native
	I0725 17:03:20.134169   32282 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52035 <nil> <nil>}
	I0725 17:03:20.134187   32282 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220725170207-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220725170207-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220725170207-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:03:20.255723   32282 main.go:134] libmachine: SSH cmd err, output: <nil>: 
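Aside: the SSH command just above keeps /etc/hosts consistent with the hostname that was set: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended, and both paths are skipped if the name already resolves. A sketch of how such a command string might be assembled in Go (buildHostsCmd is a hypothetical helper for illustration, not minikube's code):

package main

import "fmt"

// buildHostsCmd reproduces the shell fragment in the log above: ensure the
// hostname resolves locally by rewriting an existing 127.0.1.1 entry or
// appending one. Illustrative helper, not minikube's actual function.
func buildHostsCmd(hostname string) string {
	return fmt.Sprintf(
		`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() {
	fmt.Println(buildHostsCmd("default-k8s-different-port-20220725170207-14919"))
}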
	I0725 17:03:20.255742   32282 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 17:03:20.255764   32282 ubuntu.go:177] setting up certificates
	I0725 17:03:20.255778   32282 provision.go:83] configureAuth start
	I0725 17:03:20.255844   32282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725170207-14919
	I0725 17:03:20.332715   32282 provision.go:138] copyHostCerts
	I0725 17:03:20.332843   32282 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 17:03:20.332856   32282 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 17:03:20.332971   32282 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 17:03:20.333180   32282 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 17:03:20.333195   32282 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 17:03:20.333268   32282 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 17:03:20.333428   32282 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 17:03:20.333438   32282 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 17:03:20.333503   32282 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 17:03:20.333621   32282 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220725170207-14919 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220725170207-14919]
	I0725 17:03:20.541243   32282 provision.go:172] copyRemoteCerts
	I0725 17:03:20.541311   32282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:03:20.541372   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:20.617481   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:03:20.705564   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 17:03:20.722482   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0725 17:03:20.738782   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 17:03:20.755576   32282 provision.go:86] duration metric: configureAuth took 499.765405ms
	I0725 17:03:20.755592   32282 ubuntu.go:193] setting minikube options for container-runtime
	I0725 17:03:20.755750   32282 config.go:178] Loaded profile config "default-k8s-different-port-20220725170207-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:03:20.755808   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:20.830995   32282 main.go:134] libmachine: Using SSH client type: native
	I0725 17:03:20.831164   32282 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52035 <nil> <nil>}
	I0725 17:03:20.831210   32282 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 17:03:20.951097   32282 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 17:03:20.951115   32282 ubuntu.go:71] root file system type: overlay
	I0725 17:03:20.951309   32282 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 17:03:20.951387   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.024618   32282 main.go:134] libmachine: Using SSH client type: native
	I0725 17:03:21.024821   32282 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52035 <nil> <nil>}
	I0725 17:03:21.024884   32282 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 17:03:21.155736   32282 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 17:03:21.155839   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.227541   32282 main.go:134] libmachine: Using SSH client type: native
	I0725 17:03:21.227682   32282 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52035 <nil> <nil>}
	I0725 17:03:21.227695   32282 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 17:03:21.354266   32282 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:03:21.354281   32282 machine.go:91] provisioned docker machine in 1.53289569s
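Aside: the `sudo diff -u ... || { mv ...; daemon-reload; restart; }` command above only swaps in docker.service.new and restarts Docker when the generated unit differs from the installed one, so an unchanged configuration costs no restart. A minimal Go sketch of that update-if-changed pattern, with hypothetical /tmp paths standing in for the real unit files:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// applyIfChanged installs newPath over path and restarts the service only
// when the contents actually differ -- the same idempotent pattern as the
// `sudo diff -u ... || { mv ...; systemctl ... }` command in the log above.
// Paths and the service name are illustrative.
func applyIfChanged(path, newPath, service string) error {
	current, _ := os.ReadFile(path) // a missing file just reads as empty
	desired, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(current, desired) {
		return nil // unchanged: skip the disruptive restart
	}
	if err := os.Rename(newPath, path); err != nil {
		return err
	}
	return exec.Command("systemctl", "restart", service).Run()
}

func main() {
	// Example: stage a new unit next to the live one, then apply it.
	if err := applyIfChanged("/tmp/docker.service", "/tmp/docker.service.new", "docker"); err != nil {
		fmt.Println(err)
	}
}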
	I0725 17:03:21.354291   32282 start.go:307] post-start starting for "default-k8s-different-port-20220725170207-14919" (driver="docker")
	I0725 17:03:21.354296   32282 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:03:21.354356   32282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:03:21.354400   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.426948   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:03:21.517501   32282 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:03:21.521141   32282 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 17:03:21.521157   32282 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 17:03:21.521170   32282 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 17:03:21.521175   32282 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 17:03:21.521185   32282 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 17:03:21.521293   32282 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 17:03:21.521444   32282 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 17:03:21.521598   32282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:03:21.528610   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 17:03:21.545213   32282 start.go:310] post-start completed in 190.905338ms
	I0725 17:03:21.545294   32282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:03:21.545348   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.620487   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:03:21.707652   32282 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 17:03:21.711908   32282 fix.go:57] fixHost completed within 2.54261192s
	I0725 17:03:21.711922   32282 start.go:82] releasing machines lock for "default-k8s-different-port-20220725170207-14919", held for 2.542651816s
	I0725 17:03:21.712030   32282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.783744   32282 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 17:03:21.783766   32282 ssh_runner.go:195] Run: systemctl --version
	I0725 17:03:21.783825   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.783835   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.864979   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:03:21.867701   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:03:22.174208   32282 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 17:03:22.183747   32282 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 17:03:22.183817   32282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 17:03:22.195539   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:03:22.207824   32282 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 17:03:22.278879   32282 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 17:03:22.357570   32282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:03:22.429159   32282 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 17:03:22.670243   32282 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 17:03:22.752104   32282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:03:22.823163   32282 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 17:03:22.832836   32282 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 17:03:22.832902   32282 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 17:03:22.836798   32282 start.go:471] Will wait 60s for crictl version
	I0725 17:03:22.836845   32282 ssh_runner.go:195] Run: sudo crictl version
	I0725 17:03:22.944849   32282 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 17:03:22.944914   32282 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 17:03:22.982540   32282 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 17:03:23.060719   32282 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0725 17:03:23.060802   32282 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220725170207-14919 dig +short host.docker.internal
	I0725 17:03:23.195221   32282 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 17:03:23.195326   32282 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 17:03:23.199549   32282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:03:23.208845   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:23.281936   32282 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:03:23.282000   32282 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 17:03:23.311751   32282 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 17:03:23.311770   32282 docker.go:542] Images already preloaded, skipping extraction
	I0725 17:03:23.311882   32282 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 17:03:23.342978   32282 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 17:03:23.343005   32282 cache_images.go:84] Images are preloaded, skipping loading
	I0725 17:03:23.343075   32282 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 17:03:23.420584   32282 cni.go:95] Creating CNI manager for ""
	I0725 17:03:23.420610   32282 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:03:23.420649   32282 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 17:03:23.420677   32282 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220725170207-14919 NodeName:default-k8s-different-port-20220725170207-14919 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 17:03:23.420907   32282 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220725170207-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 17:03:23.421052   32282 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220725170207-14919 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220725170207-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0725 17:03:23.421164   32282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0725 17:03:23.429352   32282 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 17:03:23.429453   32282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 17:03:23.436296   32282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0725 17:03:23.448451   32282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:03:23.460992   32282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0725 17:03:23.473391   32282 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 17:03:23.477165   32282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:03:23.485940   32282 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919 for IP: 192.168.76.2
	I0725 17:03:23.486066   32282 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 17:03:23.486117   32282 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 17:03:23.486207   32282 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.key
	I0725 17:03:23.486265   32282 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/apiserver.key.31bdca25
	I0725 17:03:23.486316   32282 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/proxy-client.key
	I0725 17:03:23.486558   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 17:03:23.486595   32282 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 17:03:23.486611   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:03:23.486643   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 17:03:23.486674   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:03:23.486702   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 17:03:23.486772   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 17:03:23.487303   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 17:03:23.503454   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 17:03:23.520011   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:03:23.537165   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 17:03:23.554147   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:03:23.570824   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 17:03:23.587313   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:03:23.603920   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 17:03:23.620531   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 17:03:23.636833   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:03:23.653178   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 17:03:23.669811   32282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 17:03:23.682263   32282 ssh_runner.go:195] Run: openssl version
	I0725 17:03:23.687689   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 17:03:23.695403   32282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 17:03:23.699377   32282 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 17:03:23.699426   32282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 17:03:23.704505   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 17:03:23.711906   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:03:23.719372   32282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:03:23.723269   32282 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:03:23.723312   32282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:03:23.728183   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:03:23.735020   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 17:03:23.742823   32282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 17:03:23.746729   32282 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 17:03:23.746770   32282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 17:03:23.752095   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
	I0725 17:03:23.759557   32282 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220725170207-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220725170207-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:03:23.759654   32282 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 17:03:23.789860   32282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 17:03:23.798844   32282 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 17:03:23.798865   32282 kubeadm.go:626] restartCluster start
	I0725 17:03:23.798914   32282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 17:03:23.805922   32282 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:23.805995   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:23.881115   32282 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220725170207-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:03:23.881296   32282 kubeconfig.go:127] "default-k8s-different-port-20220725170207-14919" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig - will repair!
	I0725 17:03:23.881674   32282 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:03:23.883043   32282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 17:03:23.890753   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:23.890800   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:23.898743   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:24.100946   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:24.101128   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:24.111686   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:24.300954   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:24.301101   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:24.312078   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:24.500896   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:24.501021   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:24.511437   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:24.698908   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:24.699049   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:24.708955   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:24.901025   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:24.901131   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:24.911323   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:25.100360   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:25.100509   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:25.110569   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:25.299245   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:25.299326   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:25.308070   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:25.498874   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:25.498973   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:25.507857   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:25.700920   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:25.701117   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:25.712050   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:25.899157   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:25.899331   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:25.909594   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.099972   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.100063   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.108638   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.300126   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.300314   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.310749   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.500925   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.501114   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.511744   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.701088   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.701178   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.711707   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.898920   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.898991   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.908527   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.908541   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.908592   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.916592   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.916603   32282 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0725 17:03:26.916611   32282 kubeadm.go:1092] stopping kube-system containers ...
	I0725 17:03:26.916663   32282 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 17:03:26.946063   32282 docker.go:443] Stopping containers: [460c737fb5c8 7dfc0a2f5ad2 3ddcdd4781cc 15a9648c1f31 a603961d60f7 8dfde1eae5a6 9614d18626d9 f6c22e58eaf1 24ceb47ae5d5 a5b9836487ca 8c7a64e2d2ad c426ac6b2ca9 0e78d93d0bec 77f8eb70f520 851691122a54]
	I0725 17:03:26.946136   32282 ssh_runner.go:195] Run: docker stop 460c737fb5c8 7dfc0a2f5ad2 3ddcdd4781cc 15a9648c1f31 a603961d60f7 8dfde1eae5a6 9614d18626d9 f6c22e58eaf1 24ceb47ae5d5 a5b9836487ca 8c7a64e2d2ad c426ac6b2ca9 0e78d93d0bec 77f8eb70f520 851691122a54
	I0725 17:03:26.976006   32282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 17:03:26.986197   32282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:03:26.993600   32282 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 26 00:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 26 00:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul 26 00:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 26 00:02 /etc/kubernetes/scheduler.conf
	
	I0725 17:03:26.993655   32282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0725 17:03:27.000850   32282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0725 17:03:27.008086   32282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0725 17:03:27.014946   32282 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:27.014996   32282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 17:03:27.021534   32282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0725 17:03:27.028508   32282 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:27.028558   32282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 17:03:27.035367   32282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:03:27.042609   32282 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 17:03:27.042635   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:03:27.090419   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:03:27.564628   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:03:27.741648   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:03:27.791546   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:03:27.845332   32282 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:03:27.845379   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:03:28.387202   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:03:28.887323   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:03:28.901564   32282 api_server.go:71] duration metric: took 1.05623128s to wait for apiserver process to appear ...
	I0725 17:03:28.901589   32282 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:03:28.901603   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:03:28.903165   32282 api_server.go:256] stopped: https://127.0.0.1:52039/healthz: Get "https://127.0.0.1:52039/healthz": EOF
	I0725 17:03:29.403524   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:03:32.289677   32282 api_server.go:266] https://127.0.0.1:52039/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 17:03:32.289704   32282 api_server.go:102] status: https://127.0.0.1:52039/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 17:03:32.403418   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:03:32.411600   32282 api_server.go:266] https://127.0.0.1:52039/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 17:03:32.411617   32282 api_server.go:102] status: https://127.0.0.1:52039/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:03:32.903860   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:03:32.910669   32282 api_server.go:266] https://127.0.0.1:52039/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 17:03:32.910683   32282 api_server.go:102] status: https://127.0.0.1:52039/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:03:33.403662   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:03:33.426797   32282 api_server.go:266] https://127.0.0.1:52039/healthz returned 200:
	ok
	I0725 17:03:33.436037   32282 api_server.go:140] control plane version: v1.24.3
	I0725 17:03:33.436052   32282 api_server.go:130] duration metric: took 4.534425639s to wait for apiserver health ...
	I0725 17:03:33.436063   32282 cni.go:95] Creating CNI manager for ""
	I0725 17:03:33.436068   32282 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:03:33.436080   32282 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:03:33.443472   32282 system_pods.go:59] 8 kube-system pods found
	I0725 17:03:33.443487   32282 system_pods.go:61] "coredns-6d4b75cb6d-f7p5d" [3423c7ba-da51-4cb1-9aec-3c1ee5b1b92c] Running
	I0725 17:03:33.443492   32282 system_pods.go:61] "etcd-default-k8s-different-port-20220725170207-14919" [4c91d508-4646-4d69-8026-9ae476440264] Running
	I0725 17:03:33.443503   32282 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220725170207-14919" [c3c48eee-9b58-4889-a7db-163f78fd88d6] Running
	I0725 17:03:33.443508   32282 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220725170207-14919" [5a671731-430c-4941-bf19-9bea3d023f8b] Running
	I0725 17:03:33.443511   32282 system_pods.go:61] "kube-proxy-n6lz2" [50cf4d7a-6f85-4ba0-a947-090776ce1fd7] Running
	I0725 17:03:33.443520   32282 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220725170207-14919" [ca129696-9a54-4c8b-b03c-4f58ba0f1a67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 17:03:33.443526   32282 system_pods.go:61] "metrics-server-5c6f97fb75-tqkzw" [99176717-e2b4-422b-a1bc-f92f4930475f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:03:33.443530   32282 system_pods.go:61] "storage-provisioner" [c93c9cf7-7f23-4a6c-8525-3efc9682a3f8] Running
	I0725 17:03:33.443534   32282 system_pods.go:74] duration metric: took 7.449755ms to wait for pod list to return data ...
	I0725 17:03:33.443540   32282 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:03:33.448228   32282 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:03:33.448243   32282 node_conditions.go:123] node cpu capacity is 6
	I0725 17:03:33.448252   32282 node_conditions.go:105] duration metric: took 4.708263ms to run NodePressure ...
	I0725 17:03:33.448262   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:03:33.639277   32282 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0725 17:03:33.644159   32282 kubeadm.go:777] kubelet initialised
	I0725 17:03:33.644171   32282 kubeadm.go:778] duration metric: took 4.880105ms waiting for restarted kubelet to initialise ...
	I0725 17:03:33.644179   32282 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:03:33.650087   32282 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-f7p5d" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.681001   32282 pod_ready.go:92] pod "coredns-6d4b75cb6d-f7p5d" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:33.681015   32282 pod_ready.go:81] duration metric: took 30.914152ms waiting for pod "coredns-6d4b75cb6d-f7p5d" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.681029   32282 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.688431   32282 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:33.688442   32282 pod_ready.go:81] duration metric: took 7.406559ms waiting for pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.688450   32282 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.694800   32282 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:33.694811   32282 pod_ready.go:81] duration metric: took 6.353884ms waiting for pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.694820   32282 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.839473   32282 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:33.839483   32282 pod_ready.go:81] duration metric: took 144.656898ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.839491   32282 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n6lz2" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:34.240440   32282 pod_ready.go:92] pod "kube-proxy-n6lz2" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:34.240451   32282 pod_ready.go:81] duration metric: took 400.953001ms waiting for pod "kube-proxy-n6lz2" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:34.240457   32282 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:36.645431   32282 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:38.645716   32282 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:41.146585   32282 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:43.645553   32282 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:45.647862   32282 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:46.646107   32282 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:46.646121   32282 pod_ready.go:81] duration metric: took 12.405570626s waiting for pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:46.646128   32282 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:48.658119   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:51.159248   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:53.658009   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:55.658405   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:57.659341   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:00.158320   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:02.160189   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:04.655701   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:06.658014   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:08.658448   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:11.156846   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:13.158454   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:15.161702   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:17.658133   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:19.659112   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:22.158974   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:24.658896   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:27.158740   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:29.658078   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:31.659194   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:34.157188   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:36.160389   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:38.658439   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:40.658711   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:43.159279   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:45.659302   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:48.164534   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:50.666897   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:53.168104   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:55.173298   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:57.175306   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:59.677449   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:01.678888   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:04.178055   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:06.679377   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:09.180686   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:11.681711   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:14.182826   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:16.184761   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:18.186156   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:20.684255   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:23.216626   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:25.685145   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:27.686502   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:30.185002   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	[... 59 similar pod_ready.go:102 polls, one every 2–2.5s from 17:05:32 through 17:07:43, all reporting pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False" ...]
	I0725 17:07:45.687491   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:46.682139   32282 pod_ready.go:81] duration metric: took 4m0.006221113s waiting for pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace to be "Ready" ...
	E0725 17:07:46.682162   32282 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 17:07:46.682188   32282 pod_ready.go:38] duration metric: took 4m13.008154197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:07:46.682223   32282 kubeadm.go:630] restartCluster took 4m22.853438172s
	W0725 17:07:46.682348   32282 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
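The run of pod_ready.go lines above is a poll-until-timeout loop: the pod's Ready condition is re-checked every couple of seconds until it flips to True or the 4m0s budget is spent, after which minikube gives up on restarting and falls back to a full reset. Below is a minimal sketch of that pattern using client-go; the package and function names (podwait, waitPodReady) are illustrative, not minikube's actual source.

    package podwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the pod's Ready condition until it is true or the
    // timeout elapses, mirroring the 4m0s wait seen in the log above.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat lookup errors as "not ready yet" and keep polling
            }
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady {
                    fmt.Printf("pod %q has status \"Ready\":%q\n", name, cond.Status)
                    return cond.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

Returning (false, nil) rather than an error keeps the loop alive across transient API hiccups; only the timeout terminates it, which is why the failure above surfaces as "timed out waiting 4m0s" rather than an API error.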
	I0725 17:07:46.682379   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 17:07:49.066637   32282 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.384228373s)
	I0725 17:07:49.066700   32282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:07:49.076329   32282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:07:49.084011   32282 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 17:07:49.084059   32282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:07:49.091138   32282 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
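The "config check failed, skipping stale config cleanup" message above is minikube interpreting a non-zero exit from the `ls` probe as "the old kubeconfig files are gone, so there is nothing stale to clean up". A hedged sketch of that exit-status interpretation with os/exec (function and package names are mine):

    package configcheck

    import (
        "errors"
        "os/exec"
    )

    // staleConfigsPresent runs the same ls probe as the log above and treats
    // a non-zero exit (the files are absent) as "nothing to clean up".
    func staleConfigsPresent() (bool, error) {
        cmd := exec.Command("sudo", "ls", "-la",
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf")
        if err := cmd.Run(); err != nil {
            var exitErr *exec.ExitError
            if errors.As(err, &exitErr) {
                // Process exited with status 2: at least one file is missing,
                // so there is no stale config to clean up.
                return false, nil
            }
            return false, err // ls could not be started at all
        }
        return true, nil
    }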
	I0725 17:07:49.091163   32282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 17:07:49.381315   32282 out.go:204]   - Generating certificates and keys ...
	I0725 17:07:50.268835   32282 out.go:204]   - Booting up control plane ...
	I0725 17:07:56.824723   32282 out.go:204]   - Configuring RBAC rules ...
	I0725 17:07:57.230214   32282 cni.go:95] Creating CNI manager for ""
	I0725 17:07:57.230227   32282 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:07:57.230247   32282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 17:07:57.230330   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:07:57.230340   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b minikube.k8s.io/name=default-k8s-different-port-20220725170207-14919 minikube.k8s.io/updated_at=2022_07_25T17_07_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:07:57.417494   32282 ops.go:34] apiserver oom_adj: -16
	I0725 17:07:57.417529   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:07:57.983364   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 23 identical retries of `sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig`, one every ~500ms from 17:07:58 through 17:08:09 ...]
	I0725 17:08:09.983515   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:10.040917   32282 kubeadm.go:1045] duration metric: took 12.810557419s to wait for elevateKubeSystemPrivileges.
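The repeated `kubectl get sa default` runs above are a readiness gate: the default service account only appears once the controller manager is up, so minikube retries the same command roughly every 500ms until it exits cleanly (12.8s here). A sketch of that retry loop; the kubectl/kubeconfig paths are taken from the log, the function name is illustrative:

    package sawait

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitDefaultServiceAccount retries `kubectl get sa default` until the
    // default service account exists or the deadline passes, like the
    // ~500ms retry loop in the log above.
    func waitDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // service account exists; cluster RBAC is usable
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account did not appear within %v", timeout)
    }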
	I0725 17:08:10.040934   32282 kubeadm.go:397] StartCluster complete in 4m46.251310503s
	I0725 17:08:10.040953   32282 settings.go:142] acquiring lock: {Name:mkcd702d4f365962a78fa014f59c2f8489658e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:08:10.041037   32282 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:08:10.041877   32282 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:08:10.556735   32282 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220725170207-14919" rescaled to 1
	I0725 17:08:10.556780   32282 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 17:08:10.556788   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 17:08:10.580048   32282 out.go:177] * Verifying Kubernetes components...
	I0725 17:08:10.556817   32282 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 17:08:10.557011   32282 config.go:178] Loaded profile config "default-k8s-different-port-20220725170207-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:08:10.580138   32282 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:10.580144   32282 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:10.643431   32282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:10.643459   32282 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:10.643462   32282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:08:10.580139   32282 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220725170207-14919"
	W0725 17:08:10.643475   32282 addons.go:162] addon storage-provisioner should already be in state true
	I0725 17:08:10.643501   32282 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:10.580149   32282 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220725170207-14919"
	W0725 17:08:10.643519   32282 addons.go:162] addon dashboard should already be in state true
	I0725 17:08:10.643541   32282 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220725170207-14919"
	W0725 17:08:10.643559   32282 addons.go:162] addon metrics-server should already be in state true
	I0725 17:08:10.612554   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 17:08:10.643557   32282 host.go:66] Checking if "default-k8s-different-port-20220725170207-14919" exists ...
	I0725 17:08:10.643622   32282 host.go:66] Checking if "default-k8s-different-port-20220725170207-14919" exists ...
	I0725 17:08:10.643623   32282 host.go:66] Checking if "default-k8s-different-port-20220725170207-14919" exists ...
	I0725 17:08:10.643989   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:08:10.645747   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:08:10.645876   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:08:10.647007   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:08:10.667515   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:08:10.797151   32282 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 17:08:10.809685   32282 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220725170207-14919"
	W0725 17:08:10.818001   32282 addons.go:162] addon default-storageclass should already be in state true
	I0725 17:08:10.818032   32282 host.go:66] Checking if "default-k8s-different-port-20220725170207-14919" exists ...
	I0725 17:08:10.818047   32282 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 17:08:10.818063   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 17:08:10.818184   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:08:10.839002   32282 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 17:08:10.818682   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:08:10.852843   32282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220725170207-14919" to be "Ready" ...
	I0725 17:08:10.902072   32282 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 17:08:10.881175   32282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 17:08:10.911471   32282 node_ready.go:49] node "default-k8s-different-port-20220725170207-14919" has status "Ready":"True"
	I0725 17:08:10.922958   32282 node_ready.go:38] duration metric: took 41.750111ms waiting for node "default-k8s-different-port-20220725170207-14919" to be "Ready" ...
	I0725 17:08:10.922989   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 17:08:10.944125   32282 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:08:10.944169   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 17:08:10.944220   32282 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:08:10.944267   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 17:08:10.944371   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:08:10.944378   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:08:10.958189   32282 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:10.969574   32282 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 17:08:10.969589   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 17:08:10.969651   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:08:10.972238   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:08:11.049136   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:08:11.053800   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:08:11.065231   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
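The `docker container inspect -f` calls above resolve which host port Docker mapped to the container's port 22 (and 8444 for the apiserver); the sshutil lines then dial 127.0.0.1:52035 with the profile's id_rsa key. A minimal sketch of the port lookup, shelling out to the same inspect template (helper name is mine):

    package kicssh

    import (
        "os/exec"
        "strings"
    )

    // hostPortFor asks Docker which host port is mapped to the given
    // container port, matching the inspect template used in the log above.
    func hostPortFor(container, containerPort string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "` + containerPort + `/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
    }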
	I0725 17:08:11.227295   32282 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 17:08:11.227308   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 17:08:11.315837   32282 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 17:08:11.315857   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 17:08:11.345645   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 17:08:11.345659   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 17:08:11.347521   32282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:08:11.410971   32282 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:08:11.410998   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 17:08:11.439450   32282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 17:08:11.519490   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 17:08:11.519500   32282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:08:11.519506   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 17:08:11.545065   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 17:08:11.545079   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 17:08:11.633699   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 17:08:11.633718   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 17:08:11.731693   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 17:08:11.731726   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 17:08:11.832865   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 17:08:11.832882   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 17:08:11.922558   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 17:08:11.922581   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 17:08:12.118310   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 17:08:12.118328   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 17:08:12.142076   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:08:12.142094   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 17:08:12.238652   32282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:08:12.345112   32282 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.70151288s)
	I0725 17:08:12.345136   32282 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
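The 1.7s pipeline that just completed injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.65.2 on Docker Desktop): it fetches the coredns ConfigMap, splices the block in ahead of the `forward` directive with sed, and pipes the result back through `kubectl replace`. A sketch of the same splice at the string level (fetching and replacing the ConfigMap is omitted; names are illustrative):

    package corednsinject

    import "strings"

    // injectHostRecord splices a hosts{} stanza ahead of the forward
    // directive, as the sed pipeline in the log does, so that
    // host.minikube.internal resolves inside the cluster.
    func injectHostRecord(corefile, hostIP string) string {
        hosts := "        hosts {\n" +
            "           " + hostIP + " host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"
        i := strings.Index(corefile, "        forward . /etc/resolv.conf")
        if i < 0 {
            return corefile // no forward directive found; leave untouched
        }
        return corefile[:i] + hosts + corefile[i:]
    }

The `fallthrough` keyword matters: without it, names other than host.minikube.internal would dead-end in the hosts plugin instead of falling through to the forwarder.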
	I0725 17:08:12.526521   32282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.178971022s)
	I0725 17:08:12.526559   32282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.087084319s)
	I0725 17:08:12.547236   32282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.02771033s)
	I0725 17:08:12.547255   32282 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:13.027003   32282 pod_ready.go:102] pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace has status "Ready":"False"
	I0725 17:08:13.541658   32282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.30297446s)
	I0725 17:08:13.562622   32282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 17:08:13.599967   32282 addons.go:414] enableAddons completed in 3.043132037s
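Every addon above follows the same two-step deploy: the manifests are scp'd from memory into /etc/kubernetes/addons/, then applied in a single kubectl invocation with repeated -f flags (note the dashboard apply passes ten files at once). A sketch of the apply step, passing KUBECONFIG through sudo exactly as the log does (function name is mine):

    package addons

    import "os/exec"

    // applyManifests applies a set of addon manifests in one kubectl call
    // with repeated -f flags, as the dashboard and metrics-server steps
    // above do. Batching the files keeps the apply atomic-ish and fast.
    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
        args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        return exec.Command("sudo", args...).Run()
    }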
	I0725 17:08:15.523023   32282 pod_ready.go:102] pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace has status "Ready":"False"
	I0725 17:08:17.524595   32282 pod_ready.go:102] pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace has status "Ready":"False"
	I0725 17:08:20.022798   32282 pod_ready.go:102] pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace has status "Ready":"False"
	I0725 17:08:22.027026   32282 pod_ready.go:102] pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace has status "Ready":"False"
	I0725 17:08:22.522021   32282 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-7drh6" not found
	I0725 17:08:22.522042   32282 pod_ready.go:81] duration metric: took 11.563750542s waiting for pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace to be "Ready" ...
	E0725 17:08:22.522051   32282 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-7drh6" not found
	I0725 17:08:22.522057   32282 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-nl4gs" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.528567   32282 pod_ready.go:92] pod "coredns-6d4b75cb6d-nl4gs" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:22.528577   32282 pod_ready.go:81] duration metric: took 6.513848ms waiting for pod "coredns-6d4b75cb6d-nl4gs" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.528584   32282 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.532904   32282 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:22.532914   32282 pod_ready.go:81] duration metric: took 4.325267ms waiting for pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.532920   32282 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.538148   32282 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:22.538160   32282 pod_ready.go:81] duration metric: took 5.234314ms waiting for pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.538170   32282 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.544217   32282 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:22.544228   32282 pod_ready.go:81] duration metric: took 6.051505ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.544236   32282 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ldpkt" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.721843   32282 pod_ready.go:92] pod "kube-proxy-ldpkt" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:22.721855   32282 pod_ready.go:81] duration metric: took 177.611738ms waiting for pod "kube-proxy-ldpkt" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.721862   32282 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:23.122391   32282 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:23.122402   32282 pod_ready.go:81] duration metric: took 400.532943ms waiting for pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:23.122408   32282 pod_ready.go:38] duration metric: took 12.178180322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:08:23.122419   32282 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:08:23.122469   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:08:23.134403   32282 api_server.go:71] duration metric: took 12.577516126s to wait for apiserver process to appear ...
	I0725 17:08:23.134418   32282 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:08:23.134427   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:08:23.140121   32282 api_server.go:266] https://127.0.0.1:52039/healthz returned 200:
	ok
	I0725 17:08:23.141443   32282 api_server.go:140] control plane version: v1.24.3
	I0725 17:08:23.141454   32282 api_server.go:130] duration metric: took 7.030999ms to wait for apiserver health ...
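The health sequence above is: pgrep for the kube-apiserver process, then GET /healthz on the host-mapped port (127.0.0.1:52039) and require an HTTP 200 with body "ok", then read the server version. A sketch of the healthz step; skipping TLS verification here stands in for minikube's real certificate handling and is an assumption of this sketch, not production advice:

    package health

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs the GET /healthz probe shown in the log and
    // requires a 200 response.
    func checkHealthz(hostPort string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        resp, err := client.Get("https://" + hostPort + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }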
	I0725 17:08:23.141460   32282 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:08:23.324909   32282 system_pods.go:59] 8 kube-system pods found
	I0725 17:08:23.324923   32282 system_pods.go:61] "coredns-6d4b75cb6d-nl4gs" [819703f3-8ea8-4843-983b-e8b99ff546e5] Running
	I0725 17:08:23.324928   32282 system_pods.go:61] "etcd-default-k8s-different-port-20220725170207-14919" [ffd29c4d-5ed1-4436-bc10-be18c1a81047] Running
	I0725 17:08:23.324931   32282 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220725170207-14919" [00e17921-35c5-4ecd-b77e-08c8031d7e8d] Running
	I0725 17:08:23.324936   32282 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220725170207-14919" [673e7662-b7b1-4f6f-a44b-fdc60090a08e] Running
	I0725 17:08:23.324940   32282 system_pods.go:61] "kube-proxy-ldpkt" [e86e20e1-ea9d-459e-9592-2c03c22354cc] Running
	I0725 17:08:23.324943   32282 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220725170207-14919" [e147213e-dc5a-4ffb-8341-c446556df341] Running
	I0725 17:08:23.324952   32282 system_pods.go:61] "metrics-server-5c6f97fb75-2zfng" [ba4f819c-dca0-4e2b-a3a6-e411f7978c4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:08:23.324957   32282 system_pods.go:61] "storage-provisioner" [071ecba8-dbb4-4650-b0c1-80e4dd492eac] Running
	I0725 17:08:23.324961   32282 system_pods.go:74] duration metric: took 183.496776ms to wait for pod list to return data ...
	I0725 17:08:23.324967   32282 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:08:23.522764   32282 default_sa.go:45] found service account: "default"
	I0725 17:08:23.522776   32282 default_sa.go:55] duration metric: took 197.803608ms for default service account to be created ...
	I0725 17:08:23.522781   32282 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 17:08:23.725299   32282 system_pods.go:86] 8 kube-system pods found
	I0725 17:08:23.725314   32282 system_pods.go:89] "coredns-6d4b75cb6d-nl4gs" [819703f3-8ea8-4843-983b-e8b99ff546e5] Running
	I0725 17:08:23.725319   32282 system_pods.go:89] "etcd-default-k8s-different-port-20220725170207-14919" [ffd29c4d-5ed1-4436-bc10-be18c1a81047] Running
	I0725 17:08:23.725323   32282 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220725170207-14919" [00e17921-35c5-4ecd-b77e-08c8031d7e8d] Running
	I0725 17:08:23.725334   32282 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220725170207-14919" [673e7662-b7b1-4f6f-a44b-fdc60090a08e] Running
	I0725 17:08:23.725339   32282 system_pods.go:89] "kube-proxy-ldpkt" [e86e20e1-ea9d-459e-9592-2c03c22354cc] Running
	I0725 17:08:23.725342   32282 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220725170207-14919" [e147213e-dc5a-4ffb-8341-c446556df341] Running
	I0725 17:08:23.725347   32282 system_pods.go:89] "metrics-server-5c6f97fb75-2zfng" [ba4f819c-dca0-4e2b-a3a6-e411f7978c4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:08:23.725352   32282 system_pods.go:89] "storage-provisioner" [071ecba8-dbb4-4650-b0c1-80e4dd492eac] Running
	I0725 17:08:23.725356   32282 system_pods.go:126] duration metric: took 202.570581ms to wait for k8s-apps to be running ...
	I0725 17:08:23.725361   32282 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 17:08:23.725412   32282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:08:23.736305   32282 system_svc.go:56] duration metric: took 10.937434ms WaitForService to wait for kubelet.
	I0725 17:08:23.736322   32282 kubeadm.go:572] duration metric: took 13.179432691s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 17:08:23.736339   32282 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:08:23.922304   32282 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:08:23.922320   32282 node_conditions.go:123] node cpu capacity is 6
	I0725 17:08:23.922337   32282 node_conditions.go:105] duration metric: took 185.991676ms to run NodePressure ...
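The NodePressure verification reads the node's capacity fields (61255492Ki of ephemeral storage and 6 CPUs here) alongside its pressure conditions. One lightweight way to pull a single capacity field, sketched via kubectl's jsonpath output (helper name is mine; kubectl must be on PATH):

    package nodecheck

    import (
        "os/exec"
        "strings"
    )

    // nodeCapacity reads one capacity field from a node, e.g. "cpu" or
    // "memory", using kubectl's jsonpath output.
    func nodeCapacity(node, field string) (string, error) {
        out, err := exec.Command("kubectl", "get", "node", node,
            "-o", "jsonpath={.status.capacity."+field+"}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }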
	I0725 17:08:23.922346   32282 start.go:216] waiting for startup goroutines ...
	I0725 17:08:23.955796   32282 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0725 17:08:23.977293   32282 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220725170207-14919" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-07-26 00:03:19 UTC, end at Tue 2022-07-26 00:09:15 UTC. --
	Jul 26 00:07:47 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:47.883813466Z" level=info msg="ignoring event" container=48747481c8e8c265e3c12d0767182b44e0ecf14a5016f7e3a674d889ba5b10ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:47 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:47.952562987Z" level=info msg="ignoring event" container=4389a2c26c83c33f9c55ad61229eddd7053d69ab43fdb142d9487d07a1bb4335 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.065883405Z" level=info msg="ignoring event" container=eaadbf4ee8eec7112ab405edd6396caea8f0a98edfa4a76133a1753c5d673a04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.136125460Z" level=info msg="ignoring event" container=c25663c015c3b3f85c6a9534dd2cb4e81df6eb58c946205cad89e15e477f45c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.206447177Z" level=info msg="ignoring event" container=54a87eb3288da5e4af9c17c012204c098fc6c318c4dd2dc2ef149318920f9907 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.269133638Z" level=info msg="ignoring event" container=ac160bd3a06505b66a1b5e679d0b72e8f76659eae65f555ce2913eaec4f7b56b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.349184304Z" level=info msg="ignoring event" container=b58960a1e6593d0a3b5c3f93e1f5ea37a914ddf5e34da68e10f23ee270a6f3d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.423504661Z" level=info msg="ignoring event" container=2d85c694fe8665031ee34ca1cb9b2a7dac35ad9e638d2329905dfc278f5311ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.558853744Z" level=info msg="ignoring event" container=8dc46aa8bfddab623176d1f5534bcf271f658c461b237807dd4806036990f7d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.629447492Z" level=info msg="ignoring event" container=28f4232e847cd3b81e35b7ac96b00c5925546a66158401923410e46337d8a710 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.737950196Z" level=info msg="ignoring event" container=2a7b19ee0102eece8749d090e16501efc58d83338d4be27161a251cd852747fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:08:13 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:13.239201065Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:13 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:13.239335628Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:13 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:13.240915879Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:15 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:15.407457124Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 26 00:08:15 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:15.724590429Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 26 00:08:19 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:19.099082643Z" level=info msg="ignoring event" container=beb2705c26eddcc481eb566ce2bbfd89a2e61ac77c1b43526187432f9591dce0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:08:19 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:19.121772322Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 26 00:08:19 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:19.387870388Z" level=info msg="ignoring event" container=4c48cd1884a8bd6eb7eabe0c6d8f1179a52b4710cb212bec4d1503bc5468930e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:08:21 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:21.863571732Z" level=info msg="ignoring event" container=5c168829c3102ca1116c5858b2d67384798eaf9d690be266fadfb9e483f685ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:08:21 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:21.952703774Z" level=info msg="ignoring event" container=2bbf3981543e723d15550a0f5d79bfc570ffafdf78ff96a502a0cf3f3fa94e74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:08:26 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:26.194585948Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:26 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:26.194630907Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:26 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:26.196036315Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:37 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:37.774924906Z" level=info msg="ignoring event" container=f5b14fa2d927367c59b6d733b9cfa34751f4283a51e6eb48c2d50af11e104d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
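The dockerd pull errors above are expected in this test: the metrics-server addon was deliberately pointed at fake.domain (see "Using image fake.domain/k8s.gcr.io/echoserver:1.4" earlier), so every pull attempt dies at DNS resolution, which is why the metrics-server pod stays Pending. A tiny runnable sketch reproducing that failure mode:

    package main

    import (
        "fmt"
        "net"
    )

    // fake.domain is not resolvable, so LookupHost fails with "no such
    // host", the same error dockerd reports for the image pull above.
    func main() {
        addrs, err := net.LookupHost("fake.domain")
        if err != nil {
            fmt.Println("lookup failed as expected:", err)
            return
        }
        fmt.Println("unexpectedly resolved:", addrs)
    }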
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	f5b14fa2d9273       a90209bb39e3d                                                                                    38 seconds ago       Exited              dashboard-metrics-scraper   2                   635746301dfba
	230dc0495ef09       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   50 seconds ago       Running             kubernetes-dashboard        0                   f9504988602d9
	325507f96e421       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   3d1f01e3f3d89
	24d062bb9936c       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   2962a1e6a05ea
	b08c57265ec6d       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   27c83fe3d28c8
	e73d3ba5e1f3e       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   5c53da789ea0d
	d02df4d7fde32       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   f02d682a374fd
	7aa32347e9be0       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   c3c844541400a
	841f900516955       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   1d7566b7a2115
	
	* 
	* ==> coredns [24d062bb9936] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220725170207-14919
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220725170207-14919
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b
	                    minikube.k8s.io/name=default-k8s-different-port-20220725170207-14919
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T17_07_57_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Jul 2022 00:07:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220725170207-14919
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Jul 2022 00:09:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Jul 2022 00:09:13 +0000   Tue, 26 Jul 2022 00:07:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Jul 2022 00:09:13 +0000   Tue, 26 Jul 2022 00:07:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Jul 2022 00:09:13 +0000   Tue, 26 Jul 2022 00:07:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 26 Jul 2022 00:09:13 +0000   Tue, 26 Jul 2022 00:09:13 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220725170207-14919
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                bb35f1ff-e757-402b-bebd-06d9bce5d3fb
	  Boot ID:                    95c3cee9-5325-46b1-8645-b2afb4791ab2
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-nl4gs                                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     65s
	  kube-system                 etcd-default-k8s-different-port-20220725170207-14919                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         78s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220725170207-14919              250m (4%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220725170207-14919    200m (3%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-ldpkt                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220725170207-14919              100m (1%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 metrics-server-5c6f97fb75-2zfng                                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         63s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-tnpqb                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-lxsld                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 64s   kube-proxy       
	  Normal  Starting                 78s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  78s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s   kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s   kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s   kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasSufficientPID
	  Normal  NodeReady                78s   kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeReady
	  Normal  RegisteredNode           66s   node-controller  Node default-k8s-different-port-20220725170207-14919 event: Registered Node default-k8s-different-port-20220725170207-14919 in Controller
	  Normal  Starting                 2s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s    kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s    kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s    kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2s    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeNotReady             2s    kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeNotReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [d02df4d7fde3] <==
	* {"level":"info","ts":"2022-07-26T00:07:51.640Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-26T00:07:51.640Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-26T00:07:51.641Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-26T00:07:51.642Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:07:51.642Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:07:51.642Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-26T00:07:51.643Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220725170207-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:07:52.335Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:07:52.335Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:07:52.335Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:09:16 up  1:15,  0 users,  load average: 0.84, 0.82, 0.95
	Linux default-k8s-different-port-20220725170207-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [7aa32347e9be] <==
	* I0726 00:07:56.429207       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0726 00:07:57.039546       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0726 00:07:57.045377       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0726 00:07:57.055584       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0726 00:07:57.236494       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0726 00:08:09.715743       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0726 00:08:10.214068       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0726 00:08:11.638706       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0726 00:08:12.536759       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.80.2]
	W0726 00:08:13.418308       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:08:13.418366       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0726 00:08:13.418376       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0726 00:08:13.418254       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:08:13.418402       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0726 00:08:13.419809       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0726 00:08:13.524719       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.109.149.201]
	I0726 00:08:13.538360       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.106.162.27]
	W0726 00:09:13.375825       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:09:13.376256       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0726 00:09:13.376291       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0726 00:09:13.377997       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:09:13.378036       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0726 00:09:13.378043       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [841f90051695] <==
	* E0726 00:08:12.416069       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0726 00:08:12.424102       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-2zfng"
	I0726 00:08:13.332476       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0726 00:08:13.340819       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0726 00:08:13.342257       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0726 00:08:13.345820       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:08:13.349123       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0726 00:08:13.353153       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0726 00:08:13.353505       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:08:13.353580       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:08:13.360416       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0726 00:08:13.360655       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:08:13.360712       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0726 00:08:13.361982       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:08:13.364551       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:08:13.364641       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:08:13.414965       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:08:13.415026       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:08:13.416401       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:08:13.416437       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0726 00:08:13.461747       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-tnpqb"
	I0726 00:08:13.461793       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-lxsld"
	E0726 00:08:39.357116       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0726 00:09:12.804890       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0726 00:09:12.809543       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [b08c57265ec6] <==
	* I0726 00:08:11.442484       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0726 00:08:11.442552       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0726 00:08:11.442610       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0726 00:08:11.631007       1 server_others.go:206] "Using iptables Proxier"
	I0726 00:08:11.631053       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0726 00:08:11.631063       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0726 00:08:11.631093       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0726 00:08:11.631110       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:08:11.631331       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:08:11.631522       1 server.go:661] "Version info" version="v1.24.3"
	I0726 00:08:11.631529       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0726 00:08:11.632673       1 config.go:317] "Starting service config controller"
	I0726 00:08:11.632680       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0726 00:08:11.632693       1 config.go:226] "Starting endpoint slice config controller"
	I0726 00:08:11.632696       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0726 00:08:11.633899       1 config.go:444] "Starting node config controller"
	I0726 00:08:11.633930       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0726 00:08:11.733469       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0726 00:08:11.733545       1 shared_informer.go:262] Caches are synced for service config
	I0726 00:08:11.734418       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e73d3ba5e1f3] <==
	* W0726 00:07:54.348205       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0726 00:07:54.348219       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0726 00:07:54.348569       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0726 00:07:54.348628       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0726 00:07:54.348952       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0726 00:07:54.348984       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0726 00:07:54.349258       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:07:54.349388       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0726 00:07:54.349640       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0726 00:07:54.349672       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0726 00:07:54.350245       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0726 00:07:54.350464       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0726 00:07:54.350764       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0726 00:07:54.350921       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0726 00:07:55.227042       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:07:55.227156       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0726 00:07:55.264707       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0726 00:07:55.264761       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0726 00:07:55.275458       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0726 00:07:55.275529       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0726 00:07:55.288705       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0726 00:07:55.288753       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0726 00:07:55.463979       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0726 00:07:55.464048       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0726 00:07:55.845861       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-07-26 00:03:19 UTC, end at Tue 2022-07-26 00:09:17 UTC. --
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.253995    9834 topology_manager.go:200] "Topology Admit Handler"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284107    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlctx\" (UniqueName: \"kubernetes.io/projected/54a93336-f701-437f-87d7-2f4fa0355c1d-kube-api-access-zlctx\") pod \"dashboard-metrics-scraper-dffd48c4c-tnpqb\" (UID: \"54a93336-f701-437f-87d7-2f4fa0355c1d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tnpqb"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284170    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqcdt\" (UniqueName: \"kubernetes.io/projected/071ecba8-dbb4-4650-b0c1-80e4dd492eac-kube-api-access-fqcdt\") pod \"storage-provisioner\" (UID: \"071ecba8-dbb4-4650-b0c1-80e4dd492eac\") " pod="kube-system/storage-provisioner"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284191    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e86e20e1-ea9d-459e-9592-2c03c22354cc-lib-modules\") pod \"kube-proxy-ldpkt\" (UID: \"e86e20e1-ea9d-459e-9592-2c03c22354cc\") " pod="kube-system/kube-proxy-ldpkt"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284207    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/819703f3-8ea8-4843-983b-e8b99ff546e5-config-volume\") pod \"coredns-6d4b75cb6d-nl4gs\" (UID: \"819703f3-8ea8-4843-983b-e8b99ff546e5\") " pod="kube-system/coredns-6d4b75cb6d-nl4gs"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284221    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e86e20e1-ea9d-459e-9592-2c03c22354cc-kube-proxy\") pod \"kube-proxy-ldpkt\" (UID: \"e86e20e1-ea9d-459e-9592-2c03c22354cc\") " pod="kube-system/kube-proxy-ldpkt"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284235    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba4f819c-dca0-4e2b-a3a6-e411f7978c4e-tmp-dir\") pod \"metrics-server-5c6f97fb75-2zfng\" (UID: \"ba4f819c-dca0-4e2b-a3a6-e411f7978c4e\") " pod="kube-system/metrics-server-5c6f97fb75-2zfng"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284247    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/071ecba8-dbb4-4650-b0c1-80e4dd492eac-tmp\") pod \"storage-provisioner\" (UID: \"071ecba8-dbb4-4650-b0c1-80e4dd492eac\") " pod="kube-system/storage-provisioner"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284261    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgp8h\" (UniqueName: \"kubernetes.io/projected/e86e20e1-ea9d-459e-9592-2c03c22354cc-kube-api-access-qgp8h\") pod \"kube-proxy-ldpkt\" (UID: \"e86e20e1-ea9d-459e-9592-2c03c22354cc\") " pod="kube-system/kube-proxy-ldpkt"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284289    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/47fb3e0a-7080-462a-910d-d9820f6f9eb2-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-lxsld\" (UID: \"47fb3e0a-7080-462a-910d-d9820f6f9eb2\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-lxsld"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284313    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ccxl\" (UniqueName: \"kubernetes.io/projected/819703f3-8ea8-4843-983b-e8b99ff546e5-kube-api-access-6ccxl\") pod \"coredns-6d4b75cb6d-nl4gs\" (UID: \"819703f3-8ea8-4843-983b-e8b99ff546e5\") " pod="kube-system/coredns-6d4b75cb6d-nl4gs"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284329    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v26kk\" (UniqueName: \"kubernetes.io/projected/47fb3e0a-7080-462a-910d-d9820f6f9eb2-kube-api-access-v26kk\") pod \"kubernetes-dashboard-5fd5574d9f-lxsld\" (UID: \"47fb3e0a-7080-462a-910d-d9820f6f9eb2\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-lxsld"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284343    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e86e20e1-ea9d-459e-9592-2c03c22354cc-xtables-lock\") pod \"kube-proxy-ldpkt\" (UID: \"e86e20e1-ea9d-459e-9592-2c03c22354cc\") " pod="kube-system/kube-proxy-ldpkt"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284358    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/54a93336-f701-437f-87d7-2f4fa0355c1d-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-tnpqb\" (UID: \"54a93336-f701-437f-87d7-2f4fa0355c1d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tnpqb"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284373    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nzbx\" (UniqueName: \"kubernetes.io/projected/ba4f819c-dca0-4e2b-a3a6-e411f7978c4e-kube-api-access-5nzbx\") pod \"metrics-server-5c6f97fb75-2zfng\" (UID: \"ba4f819c-dca0-4e2b-a3a6-e411f7978c4e\") " pod="kube-system/metrics-server-5c6f97fb75-2zfng"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284386    9834 reconciler.go:157] "Reconciler: start to sync state"
	Jul 26 00:09:15 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:15.451177    9834 request.go:601] Waited for 1.06539654s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/serviceaccounts/coredns/token
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.490307    9834 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.490381    9834 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.490498    9834 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5nzbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-2zfng_kube-system(ba4f819c-dca0-4e2b-a3a6-e411f7978c4e): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.490530    9834 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-2zfng" podUID=ba4f819c-dca0-4e2b-a3a6-e411f7978c4e
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:16.655563    9834 scope.go:110] "RemoveContainer" containerID="f5b14fa2d927367c59b6d733b9cfa34751f4283a51e6eb48c2d50af11e104d99"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.757930    9834 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220725170207-14919\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220725170207-14919"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.934470    9834 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220725170207-14919\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220725170207-14919"
	Jul 26 00:09:17 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:17.157004    9834 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220725170207-14919\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220725170207-14919"
	
	* 
	* ==> kubernetes-dashboard [230dc0495ef0] <==
	* 2022/07/26 00:08:25 Starting overwatch
	2022/07/26 00:08:25 Using namespace: kubernetes-dashboard
	2022/07/26 00:08:25 Using in-cluster config to connect to apiserver
	2022/07/26 00:08:25 Using secret token for csrf signing
	2022/07/26 00:08:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/26 00:08:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/26 00:08:25 Successful initial request to the apiserver, version: v1.24.3
	2022/07/26 00:08:25 Generating JWE encryption key
	2022/07/26 00:08:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/26 00:08:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/26 00:08:25 Initializing JWE encryption key from synchronized object
	2022/07/26 00:08:25 Creating in-cluster Sidecar client
	2022/07/26 00:08:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/26 00:08:25 Serving insecurely on HTTP port: 9090
	2022/07/26 00:09:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [325507f96e42] <==
	* I0726 00:08:13.428135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0726 00:08:13.442765       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0726 00:08:13.442813       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0726 00:08:13.471916       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0726 00:08:13.472270       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220725170207-14919_91f05136-07a3-44b4-a7eb-b73612d79f6e!
	I0726 00:08:13.472885       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70a7cd8e-0a7c-4b8e-a4f8-5913815c490d", APIVersion:"v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220725170207-14919_91f05136-07a3-44b4-a7eb-b73612d79f6e became leader
	I0726 00:08:13.572783       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220725170207-14919_91f05136-07a3-44b4-a7eb-b73612d79f6e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220725170207-14919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-2zfng
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220725170207-14919 describe pod metrics-server-5c6f97fb75-2zfng
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220725170207-14919 describe pod metrics-server-5c6f97fb75-2zfng: exit status 1 (267.679927ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-2zfng" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220725170207-14919 describe pod metrics-server-5c6f97fb75-2zfng: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220725170207-14919
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220725170207-14919:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016",
	        "Created": "2022-07-26T00:02:15.01401926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288312,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-26T00:03:19.660246735Z",
	            "FinishedAt": "2022-07-26T00:03:17.680795745Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016/hostname",
	        "HostsPath": "/var/lib/docker/containers/bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016/hosts",
	        "LogPath": "/var/lib/docker/containers/bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016/bfc479d3fe893fc336eb1c3d63fc5d364065ad2a684b2dd812fd043ded949016-json.log",
	        "Name": "/default-k8s-different-port-20220725170207-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220725170207-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220725170207-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3463a3777ee8209a745216ab6489f4737e7fe0cdb7bc79dd7cef91112e447418-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c70
0ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/d
ocker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a70
0e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3463a3777ee8209a745216ab6489f4737e7fe0cdb7bc79dd7cef91112e447418/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3463a3777ee8209a745216ab6489f4737e7fe0cdb7bc79dd7cef91112e447418/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3463a3777ee8209a745216ab6489f4737e7fe0cdb7bc79dd7cef91112e447418/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220725170207-14919",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220725170207-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220725170207-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220725170207-14919",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220725170207-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f1335303166461ff045a145630cde83ce4fa4487a583a6085de7b6a8ce55b56",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52035"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52036"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52037"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52038"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52039"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3f1335303166",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220725170207-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bfc479d3fe89",
	                        "default-k8s-different-port-20220725170207-14919"
	                    ],
	                    "NetworkID": "b7d51d72d4084e46d1ce7d0a3c5830a3c9dedc45e1eb06e45db4cc80ba01ee49",
	                    "EndpointID": "bdfb6e273d034cae79962c231a3e35e7623a545732ab78153823864aeeddd680",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
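The container inspect dump above is also what later steps consume: the harness resolves host ports through Go templates over NetworkSettings.Ports. A one-liner sketch of that lookup, using this run's profile name (any container name works):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-different-port-20220725170207-14919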
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220725170207-14919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220725170207-14919 logs -n 25: (2.657995523s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220725164610-14919            | jenkins | v1.26.0 | 25 Jul 22 16:51 PDT |                     |
	|         | old-k8s-version-20220725164610-14919              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725164719-14919                 | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:54 PDT |
	|         | no-preload-20220725164719-14919                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:54 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:55 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:55 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 16:56 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220725170207-14919      | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | disable-driver-mounts-20220725170207-14919        |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
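The tail of this audit trail corresponds to the serial Pause/Unpause steps under test. Reproduced outside the harness, with the flags exactly as recorded above (profile name is this run's; substitute your own):

	out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220725170207-14919 --alsologtostderr -v=1
	out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220725170207-14919 --alsologtostderr -v=1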
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 17:03:18
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:03:18.398620   32282 out.go:296] Setting OutFile to fd 1 ...
	I0725 17:03:18.398790   32282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:03:18.398795   32282 out.go:309] Setting ErrFile to fd 2...
	I0725 17:03:18.398799   32282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:03:18.398900   32282 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 17:03:18.399360   32282 out.go:303] Setting JSON to false
	I0725 17:03:18.414237   32282 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10721,"bootTime":1658783077,"procs":370,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 17:03:18.414340   32282 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 17:03:18.435902   32282 out.go:177] * [default-k8s-different-port-20220725170207-14919] minikube v1.26.0 on Darwin 12.5
	I0725 17:03:18.479357   32282 notify.go:193] Checking for updates...
	I0725 17:03:18.501037   32282 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 17:03:18.522225   32282 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:03:18.542987   32282 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 17:03:18.568949   32282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:03:18.590061   32282 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 17:03:18.611319   32282 config.go:178] Loaded profile config "default-k8s-different-port-20220725170207-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:03:18.611660   32282 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 17:03:18.680533   32282 docker.go:137] docker version: linux-20.10.17
	I0725 17:03:18.680654   32282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:03:18.813347   32282 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:03:18.746667635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
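That docker info line is the raw material for driver validation below; the same probe can be rerun by hand, and single fields extracted with a Go template (a sketch; field names as this Docker version reports them):

	docker system info --format '{{json .}}'   # full daemon info as one JSON document
	docker system info --format '{{.Driver}}'  # just the storage driver (overlay2 here)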
	I0725 17:03:18.835386   32282 out.go:177] * Using the docker driver based on existing profile
	I0725 17:03:18.857165   32282 start.go:284] selected driver: docker
	I0725 17:03:18.857196   32282 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220725170207-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220725170207-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:03:18.857354   32282 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:03:18.860300   32282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:03:18.993840   32282 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:03:18.927496197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 17:03:18.994050   32282 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:03:18.994067   32282 cni.go:95] Creating CNI manager for ""
	I0725 17:03:18.994078   32282 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:03:18.994086   32282 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220725170207-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220725170207-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:03:19.037674   32282 out.go:177] * Starting control plane node default-k8s-different-port-20220725170207-14919 in cluster default-k8s-different-port-20220725170207-14919
	I0725 17:03:19.058862   32282 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 17:03:19.080861   32282 out.go:177] * Pulling base image ...
	I0725 17:03:19.102764   32282 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 17:03:19.102767   32282 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:03:19.102846   32282 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0725 17:03:19.102867   32282 cache.go:57] Caching tarball of preloaded images
	I0725 17:03:19.103059   32282 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 17:03:19.103080   32282 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0725 17:03:19.104118   32282 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/config.json ...
	I0725 17:03:19.169074   32282 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 17:03:19.169097   32282 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 17:03:19.169116   32282 cache.go:208] Successfully downloaded all kic artifacts
	I0725 17:03:19.169166   32282 start.go:370] acquiring machines lock for default-k8s-different-port-20220725170207-14919: {Name:mkc494994dcd0861e1ae31a1dc7096d6db767ab9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:03:19.169245   32282 start.go:374] acquired machines lock for "default-k8s-different-port-20220725170207-14919" in 62.803µs
	I0725 17:03:19.169266   32282 start.go:95] Skipping create...Using existing machine configuration
	I0725 17:03:19.169274   32282 fix.go:55] fixHost starting: 
	I0725 17:03:19.169484   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:03:19.236412   32282 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220725170207-14919: state=Stopped err=<nil>
	W0725 17:03:19.236440   32282 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 17:03:19.279905   32282 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220725170207-14919" ...
	I0725 17:03:19.300807   32282 cli_runner.go:164] Run: docker start default-k8s-different-port-20220725170207-14919
	I0725 17:03:19.660989   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:03:19.737746   32282 kic.go:415] container "default-k8s-different-port-20220725170207-14919" state is running.
	I0725 17:03:19.738389   32282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725170207-14919
	I0725 17:03:19.820894   32282 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/config.json ...
	I0725 17:03:19.821365   32282 machine.go:88] provisioning docker machine ...
	I0725 17:03:19.821393   32282 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220725170207-14919"
	I0725 17:03:19.821467   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:19.902729   32282 main.go:134] libmachine: Using SSH client type: native
	I0725 17:03:19.902919   32282 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52035 <nil> <nil>}
	I0725 17:03:19.902932   32282 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220725170207-14919 && echo "default-k8s-different-port-20220725170207-14919" | sudo tee /etc/hostname
	I0725 17:03:20.051302   32282 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220725170207-14919
	
	I0725 17:03:20.051384   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:20.133984   32282 main.go:134] libmachine: Using SSH client type: native
	I0725 17:03:20.134169   32282 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52035 <nil> <nil>}
	I0725 17:03:20.134187   32282 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220725170207-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220725170207-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220725170207-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:03:20.255723   32282 main.go:134] libmachine: SSH cmd err, output: <nil>: 
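The script above is the idempotent half of hostname provisioning: /etc/hosts is only touched when the name is missing, rewriting an existing 127.0.1.1 entry if one is present and appending otherwise. The same pattern in isolation, against an assumed hostname "my-node":

	# Assumed hostname; safe to re-run.
	if ! grep -q 'my-node' /etc/hosts; then
	  if grep -q '^127\.0\.1\.1' /etc/hosts; then
	    sudo sed -i 's/^127\.0\.1\.1.*/127.0.1.1 my-node/' /etc/hosts
	  else
	    echo '127.0.1.1 my-node' | sudo tee -a /etc/hosts
	  fi
	fi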
	I0725 17:03:20.255742   32282 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 17:03:20.255764   32282 ubuntu.go:177] setting up certificates
	I0725 17:03:20.255778   32282 provision.go:83] configureAuth start
	I0725 17:03:20.255844   32282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725170207-14919
	I0725 17:03:20.332715   32282 provision.go:138] copyHostCerts
	I0725 17:03:20.332843   32282 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 17:03:20.332856   32282 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 17:03:20.332971   32282 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 17:03:20.333180   32282 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 17:03:20.333195   32282 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 17:03:20.333268   32282 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 17:03:20.333428   32282 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 17:03:20.333438   32282 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 17:03:20.333503   32282 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 17:03:20.333621   32282 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220725170207-14919 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220725170207-14919]
	I0725 17:03:20.541243   32282 provision.go:172] copyRemoteCerts
	I0725 17:03:20.541311   32282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:03:20.541372   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:20.617481   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:03:20.705564   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 17:03:20.722482   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0725 17:03:20.738782   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 17:03:20.755576   32282 provision.go:86] duration metric: configureAuth took 499.765405ms
	I0725 17:03:20.755592   32282 ubuntu.go:193] setting minikube options for container-runtime
	I0725 17:03:20.755750   32282 config.go:178] Loaded profile config "default-k8s-different-port-20220725170207-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:03:20.755808   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:20.830995   32282 main.go:134] libmachine: Using SSH client type: native
	I0725 17:03:20.831164   32282 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52035 <nil> <nil>}
	I0725 17:03:20.831210   32282 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 17:03:20.951097   32282 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 17:03:20.951115   32282 ubuntu.go:71] root file system type: overlay
	I0725 17:03:20.951309   32282 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 17:03:20.951387   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.024618   32282 main.go:134] libmachine: Using SSH client type: native
	I0725 17:03:21.024821   32282 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52035 <nil> <nil>}
	I0725 17:03:21.024884   32282 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 17:03:21.155736   32282 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
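	The rendered unit shows systemd's override idiom: a service of Type=notify may carry only one ExecStart=, so the empty ExecStart= first clears the command inherited from the base unit before the replacement is set, and the step below installs the file and restarts Docker only when diff reports a change. A minimal sketch of the same idiom as a drop-in (path and dockerd flags are illustrative, not this run's):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' | sudo tee /etc/systemd/system/docker.service.d/override.conf
	sudo systemctl daemon-reload && sudo systemctl restart docker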
	
	I0725 17:03:21.155839   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.227541   32282 main.go:134] libmachine: Using SSH client type: native
	I0725 17:03:21.227682   32282 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52035 <nil> <nil>}
	I0725 17:03:21.227695   32282 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 17:03:21.354266   32282 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:03:21.354281   32282 machine.go:91] provisioned docker machine in 1.53289569s
	I0725 17:03:21.354291   32282 start.go:307] post-start starting for "default-k8s-different-port-20220725170207-14919" (driver="docker")
	I0725 17:03:21.354296   32282 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:03:21.354356   32282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:03:21.354400   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.426948   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:03:21.517501   32282 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:03:21.521141   32282 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 17:03:21.521157   32282 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 17:03:21.521170   32282 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 17:03:21.521175   32282 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 17:03:21.521185   32282 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 17:03:21.521293   32282 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 17:03:21.521444   32282 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 17:03:21.521598   32282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:03:21.528610   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 17:03:21.545213   32282 start.go:310] post-start completed in 190.905338ms
	I0725 17:03:21.545294   32282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:03:21.545348   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.620487   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:03:21.707652   32282 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 17:03:21.711908   32282 fix.go:57] fixHost completed within 2.54261192s
	I0725 17:03:21.711922   32282 start.go:82] releasing machines lock for "default-k8s-different-port-20220725170207-14919", held for 2.542651816s
	I0725 17:03:21.712030   32282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.783744   32282 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 17:03:21.783766   32282 ssh_runner.go:195] Run: systemctl --version
	I0725 17:03:21.783825   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.783835   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:21.864979   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:03:21.867701   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:03:22.174208   32282 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 17:03:22.183747   32282 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 17:03:22.183817   32282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 17:03:22.195539   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:03:22.207824   32282 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 17:03:22.278879   32282 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 17:03:22.357570   32282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:03:22.429159   32282 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 17:03:22.670243   32282 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 17:03:22.752104   32282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:03:22.823163   32282 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 17:03:22.832836   32282 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 17:03:22.832902   32282 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 17:03:22.836798   32282 start.go:471] Will wait 60s for crictl version
	I0725 17:03:22.836845   32282 ssh_runner.go:195] Run: sudo crictl version
	I0725 17:03:22.944849   32282 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 17:03:22.944914   32282 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 17:03:22.982540   32282 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 17:03:23.060719   32282 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0725 17:03:23.060802   32282 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220725170207-14919 dig +short host.docker.internal
	I0725 17:03:23.195221   32282 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 17:03:23.195326   32282 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 17:03:23.199549   32282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:03:23.208845   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:23.281936   32282 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:03:23.282000   32282 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 17:03:23.311751   32282 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 17:03:23.311770   32282 docker.go:542] Images already preloaded, skipping extraction
	I0725 17:03:23.311882   32282 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 17:03:23.342978   32282 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 17:03:23.343005   32282 cache_images.go:84] Images are preloaded, skipping loading
	I0725 17:03:23.343075   32282 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 17:03:23.420584   32282 cni.go:95] Creating CNI manager for ""
	I0725 17:03:23.420610   32282 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:03:23.420649   32282 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 17:03:23.420677   32282 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220725170207-14919 NodeName:default-k8s-different-port-20220725170207-14919 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 17:03:23.420907   32282 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220725170207-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 17:03:23.421052   32282 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220725170207-14919 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220725170207-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0725 17:03:23.421164   32282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0725 17:03:23.429352   32282 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 17:03:23.429453   32282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 17:03:23.436296   32282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0725 17:03:23.448451   32282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:03:23.460992   32282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
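Note that the rendered kubeadm config is written to kubeadm.yaml.new rather than over kubeadm.yaml: a little further down, minikube diffs the two to decide whether the existing control plane can be reused or must be reconfigured. The check is equivalent to this sketch (minikube inspects the diff's exit code rather than printing anything):

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "config unchanged" || echo "restart needs reconfigure"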
	I0725 17:03:23.473391   32282 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 17:03:23.477165   32282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:03:23.485940   32282 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919 for IP: 192.168.76.2
	I0725 17:03:23.486066   32282 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 17:03:23.486117   32282 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 17:03:23.486207   32282 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.key
	I0725 17:03:23.486265   32282 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/apiserver.key.31bdca25
	I0725 17:03:23.486316   32282 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/proxy-client.key
	I0725 17:03:23.486558   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 17:03:23.486595   32282 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 17:03:23.486611   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:03:23.486643   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 17:03:23.486674   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:03:23.486702   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 17:03:23.486772   32282 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 17:03:23.487303   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 17:03:23.503454   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 17:03:23.520011   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:03:23.537165   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 17:03:23.554147   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:03:23.570824   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 17:03:23.587313   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:03:23.603920   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 17:03:23.620531   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 17:03:23.636833   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:03:23.653178   32282 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 17:03:23.669811   32282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 17:03:23.682263   32282 ssh_runner.go:195] Run: openssl version
	I0725 17:03:23.687689   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 17:03:23.695403   32282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 17:03:23.699377   32282 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 17:03:23.699426   32282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 17:03:23.704505   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 17:03:23.711906   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:03:23.719372   32282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:03:23.723269   32282 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:03:23.723312   32282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:03:23.728183   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:03:23.735020   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 17:03:23.742823   32282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 17:03:23.746729   32282 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 17:03:23.746770   32282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 17:03:23.752095   32282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
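The ls / openssl / ln sequence above applies the standard OpenSSL CA-directory convention: each certificate is symlinked as <subject-hash>.0 so TLS clients can locate it by hash. For 14919.pem from this run, the rule works out to:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem)  # "51391683" in this run
	sudo ln -fs /etc/ssl/certs/14919.pem "/etc/ssl/certs/${h}.0"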
	I0725 17:03:23.759557   32282 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220725170207-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220725170207-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:03:23.759654   32282 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 17:03:23.789860   32282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 17:03:23.798844   32282 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 17:03:23.798865   32282 kubeadm.go:626] restartCluster start
	I0725 17:03:23.798914   32282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 17:03:23.805922   32282 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:23.805995   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:03:23.881115   32282 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220725170207-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:03:23.881296   32282 kubeconfig.go:127] "default-k8s-different-port-20220725170207-14919" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig - will repair!
	I0725 17:03:23.881674   32282 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:03:23.883043   32282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 17:03:23.890753   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:23.890800   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:23.898743   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:24.100946   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:24.101128   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:24.111686   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:24.300954   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:24.301101   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:24.312078   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:24.500896   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:24.501021   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:24.511437   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:24.698908   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:24.699049   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:24.708955   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:24.901025   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:24.901131   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:24.911323   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:25.100360   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:25.100509   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:25.110569   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:25.299245   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:25.299326   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:25.308070   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:25.498874   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:25.498973   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:25.507857   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:25.700920   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:25.701117   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:25.712050   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:25.899157   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:25.899331   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:25.909594   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.099972   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.100063   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.108638   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.300126   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.300314   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.310749   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.500925   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.501114   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.511744   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.701088   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.701178   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.711707   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.898920   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.898991   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.908527   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.908541   32282 api_server.go:165] Checking apiserver status ...
	I0725 17:03:26.908592   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:03:26.916592   32282 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:26.916603   32282 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0725 17:03:26.916611   32282 kubeadm.go:1092] stopping kube-system containers ...
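The run of identical "Checking apiserver status" entries above is minikube polling for a kube-apiserver process at roughly 200ms intervals; after about three seconds with no match it declares the control plane down ("needs reconfigure") and starts stopping the kube-system containers listed just below. A shell equivalent of the probe loop (minikube implements it in Go against a deadline):

	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 0.2; done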
	I0725 17:03:26.916663   32282 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 17:03:26.946063   32282 docker.go:443] Stopping containers: [460c737fb5c8 7dfc0a2f5ad2 3ddcdd4781cc 15a9648c1f31 a603961d60f7 8dfde1eae5a6 9614d18626d9 f6c22e58eaf1 24ceb47ae5d5 a5b9836487ca 8c7a64e2d2ad c426ac6b2ca9 0e78d93d0bec 77f8eb70f520 851691122a54]
	I0725 17:03:26.946136   32282 ssh_runner.go:195] Run: docker stop 460c737fb5c8 7dfc0a2f5ad2 3ddcdd4781cc 15a9648c1f31 a603961d60f7 8dfde1eae5a6 9614d18626d9 f6c22e58eaf1 24ceb47ae5d5 a5b9836487ca 8c7a64e2d2ad c426ac6b2ca9 0e78d93d0bec 77f8eb70f520 851691122a54
	I0725 17:03:26.976006   32282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 17:03:26.986197   32282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:03:26.993600   32282 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 26 00:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 26 00:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul 26 00:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 26 00:02 /etc/kubernetes/scheduler.conf
	
	I0725 17:03:26.993655   32282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0725 17:03:27.000850   32282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0725 17:03:27.008086   32282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0725 17:03:27.014946   32282 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:27.014996   32282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 17:03:27.021534   32282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0725 17:03:27.028508   32282 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:03:27.028558   32282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 17:03:27.035367   32282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:03:27.042609   32282 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 17:03:27.042635   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:03:27.090419   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:03:27.564628   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:03:27.741648   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:03:27.791546   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
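Rather than a full `kubeadm init`, the restart path replays only the phases needed to bring the existing node back up, in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the freshly copied config. Condensed from the five commands above:

	cfg=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase $phase --config "$cfg"
	done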
	I0725 17:03:27.845332   32282 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:03:27.845379   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:03:28.387202   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:03:28.887323   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:03:28.901564   32282 api_server.go:71] duration metric: took 1.05623128s to wait for apiserver process to appear ...
	I0725 17:03:28.901589   32282 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:03:28.901603   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:03:28.903165   32282 api_server.go:256] stopped: https://127.0.0.1:52039/healthz: Get "https://127.0.0.1:52039/healthz": EOF
	I0725 17:03:29.403524   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:03:32.289677   32282 api_server.go:266] https://127.0.0.1:52039/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 17:03:32.289704   32282 api_server.go:102] status: https://127.0.0.1:52039/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 17:03:32.403418   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:03:32.411600   32282 api_server.go:266] https://127.0.0.1:52039/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 17:03:32.411617   32282 api_server.go:102] status: https://127.0.0.1:52039/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:03:32.903860   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:03:32.910669   32282 api_server.go:266] https://127.0.0.1:52039/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 17:03:32.910683   32282 api_server.go:102] status: https://127.0.0.1:52039/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:03:33.403662   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:03:33.426797   32282 api_server.go:266] https://127.0.0.1:52039/healthz returned 200:
	ok
	I0725 17:03:33.436037   32282 api_server.go:140] control plane version: v1.24.3
	I0725 17:03:33.436052   32282 api_server.go:130] duration metric: took 4.534425639s to wait for apiserver health ...
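The healthz sequence above is the normal startup progression: EOF while nothing is listening yet, 403 once the server answers but before anonymous access to /healthz has been bootstrapped, 500 while post-start hooks (rbac/bootstrap-roles, the bootstrap priority classes, apiservice registration) are still pending, then 200. The per-hook breakdown shown in the 500 responses can also be requested explicitly, e.g.:

	curl -k https://127.0.0.1:52039/healthz?verbose   # port from this run; -k because the serving cert chains to minikube's own CA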
	I0725 17:03:33.436063   32282 cni.go:95] Creating CNI manager for ""
	I0725 17:03:33.436068   32282 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:03:33.436080   32282 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:03:33.443472   32282 system_pods.go:59] 8 kube-system pods found
	I0725 17:03:33.443487   32282 system_pods.go:61] "coredns-6d4b75cb6d-f7p5d" [3423c7ba-da51-4cb1-9aec-3c1ee5b1b92c] Running
	I0725 17:03:33.443492   32282 system_pods.go:61] "etcd-default-k8s-different-port-20220725170207-14919" [4c91d508-4646-4d69-8026-9ae476440264] Running
	I0725 17:03:33.443503   32282 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220725170207-14919" [c3c48eee-9b58-4889-a7db-163f78fd88d6] Running
	I0725 17:03:33.443508   32282 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220725170207-14919" [5a671731-430c-4941-bf19-9bea3d023f8b] Running
	I0725 17:03:33.443511   32282 system_pods.go:61] "kube-proxy-n6lz2" [50cf4d7a-6f85-4ba0-a947-090776ce1fd7] Running
	I0725 17:03:33.443520   32282 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220725170207-14919" [ca129696-9a54-4c8b-b03c-4f58ba0f1a67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 17:03:33.443526   32282 system_pods.go:61] "metrics-server-5c6f97fb75-tqkzw" [99176717-e2b4-422b-a1bc-f92f4930475f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:03:33.443530   32282 system_pods.go:61] "storage-provisioner" [c93c9cf7-7f23-4a6c-8525-3efc9682a3f8] Running
	I0725 17:03:33.443534   32282 system_pods.go:74] duration metric: took 7.449755ms to wait for pod list to return data ...
	I0725 17:03:33.443540   32282 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:03:33.448228   32282 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:03:33.448243   32282 node_conditions.go:123] node cpu capacity is 6
	I0725 17:03:33.448252   32282 node_conditions.go:105] duration metric: took 4.708263ms to run NodePressure ...
	I0725 17:03:33.448262   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:03:33.639277   32282 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0725 17:03:33.644159   32282 kubeadm.go:777] kubelet initialised
	I0725 17:03:33.644171   32282 kubeadm.go:778] duration metric: took 4.880105ms waiting for restarted kubelet to initialise ...
	I0725 17:03:33.644179   32282 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:03:33.650087   32282 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-f7p5d" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.681001   32282 pod_ready.go:92] pod "coredns-6d4b75cb6d-f7p5d" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:33.681015   32282 pod_ready.go:81] duration metric: took 30.914152ms waiting for pod "coredns-6d4b75cb6d-f7p5d" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.681029   32282 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.688431   32282 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:33.688442   32282 pod_ready.go:81] duration metric: took 7.406559ms waiting for pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.688450   32282 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.694800   32282 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:33.694811   32282 pod_ready.go:81] duration metric: took 6.353884ms waiting for pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.694820   32282 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.839473   32282 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:33.839483   32282 pod_ready.go:81] duration metric: took 144.656898ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:33.839491   32282 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n6lz2" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:34.240440   32282 pod_ready.go:92] pod "kube-proxy-n6lz2" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:34.240451   32282 pod_ready.go:81] duration metric: took 400.953001ms waiting for pod "kube-proxy-n6lz2" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:34.240457   32282 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:36.645431   32282 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:38.645716   32282 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:41.146585   32282 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:43.645553   32282 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:45.647862   32282 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:46.646107   32282 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:03:46.646121   32282 pod_ready.go:81] duration metric: took 12.405570626s waiting for pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:03:46.646128   32282 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace to be "Ready" ...
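The long metrics-server wait that follows is expected to stall in this profile: the StartCluster config above pins CustomAddonRegistries to MetricsServer:fake.domain, so the image pull can never succeed and the pod never reports Ready within this window. To confirm that failure mode one could inspect the pod's events, along the lines of:

	kubectl -n kube-system describe pod metrics-server-5c6f97fb75-tqkzw   # events should show the pull from fake.domain failing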
	I0725 17:03:48.658119   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:51.159248   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:53.658009   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:55.658405   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:03:57.659341   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:00.158320   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:02.160189   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:04.655701   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:06.658014   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:08.658448   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:11.156846   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:13.158454   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:15.161702   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:17.658133   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:19.659112   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:22.158974   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:24.658896   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:27.158740   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:29.658078   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:31.659194   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:34.157188   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:36.160389   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:38.658439   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:40.658711   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:43.159279   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:45.659302   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:48.164534   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:50.666897   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:53.168104   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:55.173298   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:57.175306   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:04:59.677449   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:01.678888   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:04.178055   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:06.679377   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:09.180686   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:11.681711   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:14.182826   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:16.184761   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:18.186156   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:20.684255   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:23.216626   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:25.685145   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:27.686502   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:30.185002   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:32.187055   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:34.685719   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:36.686579   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:38.687370   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:40.687953   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:43.187831   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:45.686989   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:48.185933   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:50.188026   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:52.188263   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:54.685677   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:56.688279   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:05:59.186488   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:01.188265   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:03.688280   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:06.188586   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:08.687788   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:10.688365   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:13.186389   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:15.688237   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:18.187221   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:20.685154   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:22.686724   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:24.688812   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:27.187448   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:29.188586   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:31.686893   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:33.687564   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:36.186653   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:38.685617   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:40.687607   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:43.185337   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:45.185670   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:47.188746   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:49.686415   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:52.185644   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:54.189158   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:56.685507   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:06:58.686875   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:00.688735   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:03.185887   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:05.188149   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:07.188198   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:09.189322   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:11.686074   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:13.688936   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:16.185151   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:18.188447   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:20.685703   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:22.687237   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:25.185781   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:27.189678   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:29.687862   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:31.689830   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:34.186536   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:36.686290   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:39.189176   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:41.685783   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:43.686387   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:45.687491   32282 pod_ready.go:102] pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace has status "Ready":"False"
	I0725 17:07:46.682139   32282 pod_ready.go:81] duration metric: took 4m0.006221113s waiting for pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace to be "Ready" ...
	E0725 17:07:46.682162   32282 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-tqkzw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 17:07:46.682188   32282 pod_ready.go:38] duration metric: took 4m13.008154197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:07:46.682223   32282 kubeadm.go:630] restartCluster took 4m22.853438172s
	W0725 17:07:46.682348   32282 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0725 17:07:46.682379   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 17:07:49.066637   32282 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.384228373s)
	I0725 17:07:49.066700   32282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:07:49.076329   32282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:07:49.084011   32282 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 17:07:49.084059   32282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:07:49.091138   32282 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 17:07:49.091163   32282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 17:07:49.381315   32282 out.go:204]   - Generating certificates and keys ...
	I0725 17:07:50.268835   32282 out.go:204]   - Booting up control plane ...
	I0725 17:07:56.824723   32282 out.go:204]   - Configuring RBAC rules ...
	I0725 17:07:57.230214   32282 cni.go:95] Creating CNI manager for ""
	I0725 17:07:57.230227   32282 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:07:57.230247   32282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 17:07:57.230330   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:07:57.230340   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b minikube.k8s.io/name=default-k8s-different-port-20220725170207-14919 minikube.k8s.io/updated_at=2022_07_25T17_07_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:07:57.417494   32282 ops.go:34] apiserver oom_adj: -16
	I0725 17:07:57.417529   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:07:57.983364   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:07:58.483299   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:07:58.981431   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:07:59.483403   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:07:59.983289   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:00.481653   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:00.982912   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:01.481331   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:01.983453   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:02.482296   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:02.983457   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:03.483388   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:03.982307   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:04.482289   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:04.981781   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:05.481565   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:05.983450   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:06.481445   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:06.983518   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:07.481653   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:07.981421   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:08.483443   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:08.981758   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:09.481412   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:09.983515   32282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:08:10.040917   32282 kubeadm.go:1045] duration metric: took 12.810557419s to wait for elevateKubeSystemPrivileges.
	I0725 17:08:10.040934   32282 kubeadm.go:397] StartCluster complete in 4m46.251310503s
	I0725 17:08:10.040953   32282 settings.go:142] acquiring lock: {Name:mkcd702d4f365962a78fa014f59c2f8489658e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:08:10.041037   32282 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:08:10.041877   32282 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:08:10.556735   32282 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220725170207-14919" rescaled to 1
	I0725 17:08:10.556780   32282 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 17:08:10.556788   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 17:08:10.580048   32282 out.go:177] * Verifying Kubernetes components...
	I0725 17:08:10.556817   32282 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 17:08:10.557011   32282 config.go:178] Loaded profile config "default-k8s-different-port-20220725170207-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:08:10.580138   32282 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:10.580144   32282 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:10.643431   32282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:10.643459   32282 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:10.643462   32282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:08:10.580139   32282 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220725170207-14919"
	W0725 17:08:10.643475   32282 addons.go:162] addon storage-provisioner should already be in state true
	I0725 17:08:10.643501   32282 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:10.580149   32282 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220725170207-14919"
	W0725 17:08:10.643519   32282 addons.go:162] addon dashboard should already be in state true
	I0725 17:08:10.643541   32282 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220725170207-14919"
	W0725 17:08:10.643559   32282 addons.go:162] addon metrics-server should already be in state true
	I0725 17:08:10.612554   32282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 17:08:10.643557   32282 host.go:66] Checking if "default-k8s-different-port-20220725170207-14919" exists ...
	I0725 17:08:10.643622   32282 host.go:66] Checking if "default-k8s-different-port-20220725170207-14919" exists ...
	I0725 17:08:10.643623   32282 host.go:66] Checking if "default-k8s-different-port-20220725170207-14919" exists ...
	I0725 17:08:10.643989   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:08:10.645747   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:08:10.645876   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:08:10.647007   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:08:10.667515   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:08:10.797151   32282 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 17:08:10.809685   32282 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220725170207-14919"
	W0725 17:08:10.818001   32282 addons.go:162] addon default-storageclass should already be in state true
	I0725 17:08:10.818032   32282 host.go:66] Checking if "default-k8s-different-port-20220725170207-14919" exists ...
	I0725 17:08:10.818047   32282 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 17:08:10.818063   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 17:08:10.818184   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:08:10.839002   32282 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 17:08:10.818682   32282 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725170207-14919 --format={{.State.Status}}
	I0725 17:08:10.852843   32282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220725170207-14919" to be "Ready" ...
	I0725 17:08:10.902072   32282 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 17:08:10.881175   32282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 17:08:10.911471   32282 node_ready.go:49] node "default-k8s-different-port-20220725170207-14919" has status "Ready":"True"
	I0725 17:08:10.922958   32282 node_ready.go:38] duration metric: took 41.750111ms waiting for node "default-k8s-different-port-20220725170207-14919" to be "Ready" ...
	I0725 17:08:10.922989   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 17:08:10.944125   32282 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:08:10.944169   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 17:08:10.944220   32282 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:08:10.944267   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 17:08:10.944371   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:08:10.944378   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:08:10.958189   32282 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:10.969574   32282 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 17:08:10.969589   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 17:08:10.969651   32282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725170207-14919
	I0725 17:08:10.972238   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:08:11.049136   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:08:11.053800   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:08:11.065231   32282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52035 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/default-k8s-different-port-20220725170207-14919/id_rsa Username:docker}
	I0725 17:08:11.227295   32282 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 17:08:11.227308   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 17:08:11.315837   32282 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 17:08:11.315857   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 17:08:11.345645   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 17:08:11.345659   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 17:08:11.347521   32282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:08:11.410971   32282 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:08:11.410998   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 17:08:11.439450   32282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 17:08:11.519490   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 17:08:11.519500   32282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:08:11.519506   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 17:08:11.545065   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 17:08:11.545079   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 17:08:11.633699   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 17:08:11.633718   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 17:08:11.731693   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 17:08:11.731726   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 17:08:11.832865   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 17:08:11.832882   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 17:08:11.922558   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 17:08:11.922581   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 17:08:12.118310   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 17:08:12.118328   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 17:08:12.142076   32282 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:08:12.142094   32282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 17:08:12.238652   32282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:08:12.345112   32282 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.70151288s)
	I0725 17:08:12.345136   32282 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0725 17:08:12.526521   32282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.178971022s)
	I0725 17:08:12.526559   32282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.087084319s)
	I0725 17:08:12.547236   32282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.02771033s)
	I0725 17:08:12.547255   32282 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220725170207-14919"
	I0725 17:08:13.027003   32282 pod_ready.go:102] pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace has status "Ready":"False"
	I0725 17:08:13.541658   32282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.30297446s)
	I0725 17:08:13.562622   32282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 17:08:13.599967   32282 addons.go:414] enableAddons completed in 3.043132037s
	I0725 17:08:15.523023   32282 pod_ready.go:102] pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace has status "Ready":"False"
	I0725 17:08:17.524595   32282 pod_ready.go:102] pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace has status "Ready":"False"
	I0725 17:08:20.022798   32282 pod_ready.go:102] pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace has status "Ready":"False"
	I0725 17:08:22.027026   32282 pod_ready.go:102] pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace has status "Ready":"False"
	I0725 17:08:22.522021   32282 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-7drh6" not found
	I0725 17:08:22.522042   32282 pod_ready.go:81] duration metric: took 11.563750542s waiting for pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace to be "Ready" ...
	E0725 17:08:22.522051   32282 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-7drh6" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-7drh6" not found
	I0725 17:08:22.522057   32282 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-nl4gs" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.528567   32282 pod_ready.go:92] pod "coredns-6d4b75cb6d-nl4gs" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:22.528577   32282 pod_ready.go:81] duration metric: took 6.513848ms waiting for pod "coredns-6d4b75cb6d-nl4gs" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.528584   32282 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.532904   32282 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:22.532914   32282 pod_ready.go:81] duration metric: took 4.325267ms waiting for pod "etcd-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.532920   32282 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.538148   32282 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:22.538160   32282 pod_ready.go:81] duration metric: took 5.234314ms waiting for pod "kube-apiserver-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.538170   32282 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.544217   32282 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:22.544228   32282 pod_ready.go:81] duration metric: took 6.051505ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.544236   32282 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ldpkt" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.721843   32282 pod_ready.go:92] pod "kube-proxy-ldpkt" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:22.721855   32282 pod_ready.go:81] duration metric: took 177.611738ms waiting for pod "kube-proxy-ldpkt" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:22.721862   32282 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:23.122391   32282 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace has status "Ready":"True"
	I0725 17:08:23.122402   32282 pod_ready.go:81] duration metric: took 400.532943ms waiting for pod "kube-scheduler-default-k8s-different-port-20220725170207-14919" in "kube-system" namespace to be "Ready" ...
	I0725 17:08:23.122408   32282 pod_ready.go:38] duration metric: took 12.178180322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:08:23.122419   32282 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:08:23.122469   32282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:08:23.134403   32282 api_server.go:71] duration metric: took 12.577516126s to wait for apiserver process to appear ...
	I0725 17:08:23.134418   32282 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:08:23.134427   32282 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52039/healthz ...
	I0725 17:08:23.140121   32282 api_server.go:266] https://127.0.0.1:52039/healthz returned 200:
	ok
	I0725 17:08:23.141443   32282 api_server.go:140] control plane version: v1.24.3
	I0725 17:08:23.141454   32282 api_server.go:130] duration metric: took 7.030999ms to wait for apiserver health ...
	I0725 17:08:23.141460   32282 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:08:23.324909   32282 system_pods.go:59] 8 kube-system pods found
	I0725 17:08:23.324923   32282 system_pods.go:61] "coredns-6d4b75cb6d-nl4gs" [819703f3-8ea8-4843-983b-e8b99ff546e5] Running
	I0725 17:08:23.324928   32282 system_pods.go:61] "etcd-default-k8s-different-port-20220725170207-14919" [ffd29c4d-5ed1-4436-bc10-be18c1a81047] Running
	I0725 17:08:23.324931   32282 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220725170207-14919" [00e17921-35c5-4ecd-b77e-08c8031d7e8d] Running
	I0725 17:08:23.324936   32282 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220725170207-14919" [673e7662-b7b1-4f6f-a44b-fdc60090a08e] Running
	I0725 17:08:23.324940   32282 system_pods.go:61] "kube-proxy-ldpkt" [e86e20e1-ea9d-459e-9592-2c03c22354cc] Running
	I0725 17:08:23.324943   32282 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220725170207-14919" [e147213e-dc5a-4ffb-8341-c446556df341] Running
	I0725 17:08:23.324952   32282 system_pods.go:61] "metrics-server-5c6f97fb75-2zfng" [ba4f819c-dca0-4e2b-a3a6-e411f7978c4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:08:23.324957   32282 system_pods.go:61] "storage-provisioner" [071ecba8-dbb4-4650-b0c1-80e4dd492eac] Running
	I0725 17:08:23.324961   32282 system_pods.go:74] duration metric: took 183.496776ms to wait for pod list to return data ...
	I0725 17:08:23.324967   32282 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:08:23.522764   32282 default_sa.go:45] found service account: "default"
	I0725 17:08:23.522776   32282 default_sa.go:55] duration metric: took 197.803608ms for default service account to be created ...
	I0725 17:08:23.522781   32282 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 17:08:23.725299   32282 system_pods.go:86] 8 kube-system pods found
	I0725 17:08:23.725314   32282 system_pods.go:89] "coredns-6d4b75cb6d-nl4gs" [819703f3-8ea8-4843-983b-e8b99ff546e5] Running
	I0725 17:08:23.725319   32282 system_pods.go:89] "etcd-default-k8s-different-port-20220725170207-14919" [ffd29c4d-5ed1-4436-bc10-be18c1a81047] Running
	I0725 17:08:23.725323   32282 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220725170207-14919" [00e17921-35c5-4ecd-b77e-08c8031d7e8d] Running
	I0725 17:08:23.725334   32282 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220725170207-14919" [673e7662-b7b1-4f6f-a44b-fdc60090a08e] Running
	I0725 17:08:23.725339   32282 system_pods.go:89] "kube-proxy-ldpkt" [e86e20e1-ea9d-459e-9592-2c03c22354cc] Running
	I0725 17:08:23.725342   32282 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220725170207-14919" [e147213e-dc5a-4ffb-8341-c446556df341] Running
	I0725 17:08:23.725347   32282 system_pods.go:89] "metrics-server-5c6f97fb75-2zfng" [ba4f819c-dca0-4e2b-a3a6-e411f7978c4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:08:23.725352   32282 system_pods.go:89] "storage-provisioner" [071ecba8-dbb4-4650-b0c1-80e4dd492eac] Running
	I0725 17:08:23.725356   32282 system_pods.go:126] duration metric: took 202.570581ms to wait for k8s-apps to be running ...
	I0725 17:08:23.725361   32282 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 17:08:23.725412   32282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:08:23.736305   32282 system_svc.go:56] duration metric: took 10.937434ms WaitForService to wait for kubelet.
	I0725 17:08:23.736322   32282 kubeadm.go:572] duration metric: took 13.179432691s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 17:08:23.736339   32282 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:08:23.922304   32282 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:08:23.922320   32282 node_conditions.go:123] node cpu capacity is 6
	I0725 17:08:23.922337   32282 node_conditions.go:105] duration metric: took 185.991676ms to run NodePressure ...
	I0725 17:08:23.922346   32282 start.go:216] waiting for startup goroutines ...
	I0725 17:08:23.955796   32282 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0725 17:08:23.977293   32282 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220725170207-14919" cluster and "default" namespace by default
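
The pod_ready.go:102 lines that dominate the trace above come from a simple condition poll: fetch the pod, read its Ready condition, log the status, and retry until the 4m0s budget is spent. A minimal sketch of that pattern, assuming client-go and the default kubeconfig (illustrative only, not minikube's actual pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, give up after 4m0s -- the timeout reported in the log.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-5c6f97fb75-tqkzw", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as "not ready yet"
		}
		fmt.Printf("pod %q has status Ready: %v\n", pod.Name, podIsReady(pod))
		return podIsReady(pod), nil
	})
	if err != nil {
		fmt.Println("WaitExtra-style timeout:", err)
	}
}

Here metrics-server never becomes Ready because its image pull fails (see the Docker log below), so the poll exhausts the full four minutes and the cluster is reset.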
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-07-26 00:03:19 UTC, end at Tue 2022-07-26 00:09:19 UTC. --
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.206447177Z" level=info msg="ignoring event" container=54a87eb3288da5e4af9c17c012204c098fc6c318c4dd2dc2ef149318920f9907 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.269133638Z" level=info msg="ignoring event" container=ac160bd3a06505b66a1b5e679d0b72e8f76659eae65f555ce2913eaec4f7b56b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.349184304Z" level=info msg="ignoring event" container=b58960a1e6593d0a3b5c3f93e1f5ea37a914ddf5e34da68e10f23ee270a6f3d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.423504661Z" level=info msg="ignoring event" container=2d85c694fe8665031ee34ca1cb9b2a7dac35ad9e638d2329905dfc278f5311ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.558853744Z" level=info msg="ignoring event" container=8dc46aa8bfddab623176d1f5534bcf271f658c461b237807dd4806036990f7d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.629447492Z" level=info msg="ignoring event" container=28f4232e847cd3b81e35b7ac96b00c5925546a66158401923410e46337d8a710 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:07:48 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:07:48.737950196Z" level=info msg="ignoring event" container=2a7b19ee0102eece8749d090e16501efc58d83338d4be27161a251cd852747fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:08:13 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:13.239201065Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:13 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:13.239335628Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:13 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:13.240915879Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:15 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:15.407457124Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 26 00:08:15 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:15.724590429Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 26 00:08:19 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:19.099082643Z" level=info msg="ignoring event" container=beb2705c26eddcc481eb566ce2bbfd89a2e61ac77c1b43526187432f9591dce0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:08:19 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:19.121772322Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 26 00:08:19 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:19.387870388Z" level=info msg="ignoring event" container=4c48cd1884a8bd6eb7eabe0c6d8f1179a52b4710cb212bec4d1503bc5468930e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:08:21 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:21.863571732Z" level=info msg="ignoring event" container=5c168829c3102ca1116c5858b2d67384798eaf9d690be266fadfb9e483f685ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:08:21 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:21.952703774Z" level=info msg="ignoring event" container=2bbf3981543e723d15550a0f5d79bfc570ffafdf78ff96a502a0cf3f3fa94e74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:08:26 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:26.194585948Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:26 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:26.194630907Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:26 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:26.196036315Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:08:37 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:08:37.774924906Z" level=info msg="ignoring event" container=f5b14fa2d927367c59b6d733b9cfa34751f4283a51e6eb48c2d50af11e104d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:09:16.423227200Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:09:16.423272590Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:09:16.489569707Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 dockerd[507]: time="2022-07-26T00:09:16.985309899Z" level=info msg="ignoring event" container=cfa80af081f87b34091a53d522387743397a5ab028340e69dddbec63e65053f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
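
The recurring Get "https://fake.domain/v2/" failures above are expected: this test deliberately points the metrics-server image at fake.domain (note "Using image fake.domain/k8s.gcr.io/echoserver:1.4" earlier), an unresolvable registry host, so the pod stays Pending. The URL dockerd retries is the standard registry v2 "ping" endpoint; a minimal sketch of the same probe in Go, with fake.domain taken from the log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("https://fake.domain/v2/")
	if err != nil {
		// Expected here: "dial tcp: lookup fake.domain ...: no such host"
		fmt.Println("registry ping failed:", err)
		return
	}
	defer resp.Body.Close()
	// A live v2 registry answers 200 or 401 and advertises its API version.
	fmt.Println("registry ping:", resp.Status,
		resp.Header.Get("Docker-Distribution-Api-Version"))
}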
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	cfa80af081f87       a90209bb39e3d                                                                                    4 seconds ago        Exited              dashboard-metrics-scraper   3                   635746301dfba
	230dc0495ef09       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   55 seconds ago       Running             kubernetes-dashboard        0                   f9504988602d9
	325507f96e421       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   3d1f01e3f3d89
	24d062bb9936c       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   2962a1e6a05ea
	b08c57265ec6d       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   27c83fe3d28c8
	e73d3ba5e1f3e       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   5c53da789ea0d
	d02df4d7fde32       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   f02d682a374fd
	7aa32347e9be0       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   c3c844541400a
	841f900516955       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   1d7566b7a2115
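
The table shows dashboard-metrics-scraper in state Exited on its third attempt, i.e. it is restart-looping while the rest of the control plane runs. One way to pull the exit details for such a container is an inspect call; a sketch assuming the Docker Go SDK, with the container ID copied from the first row of the table:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Container ID taken from the status table above.
	info, err := cli.ContainerInspect(context.Background(), "cfa80af081f87")
	if err != nil {
		panic(err)
	}
	fmt.Printf("status=%s exitCode=%d finishedAt=%s\n",
		info.State.Status, info.State.ExitCode, info.State.FinishedAt)
}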
	
	* 
	* ==> coredns [24d062bb9936] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
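
The two "Running configuration MD5" lines bracket a live Corefile reload: at 17:08:12 the start log above shows minikube piping the coredns ConfigMap through sed and kubectl replace to inject a host record. Reconstructed from that sed expression, the block inserted ahead of the existing "forward . /etc/resolv.conf" directive is:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }

CoreDNS detects the changed config, which produces the Reloading / Reloading complete pair and the new MD5 without a pod restart.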
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220725170207-14919
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220725170207-14919
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b
	                    minikube.k8s.io/name=default-k8s-different-port-20220725170207-14919
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T17_07_57_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Jul 2022 00:07:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220725170207-14919
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Jul 2022 00:09:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Jul 2022 00:09:13 +0000   Tue, 26 Jul 2022 00:07:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Jul 2022 00:09:13 +0000   Tue, 26 Jul 2022 00:07:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Jul 2022 00:09:13 +0000   Tue, 26 Jul 2022 00:07:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 26 Jul 2022 00:09:13 +0000   Tue, 26 Jul 2022 00:09:13 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220725170207-14919
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                bb35f1ff-e757-402b-bebd-06d9bce5d3fb
	  Boot ID:                    95c3cee9-5325-46b1-8645-b2afb4791ab2
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-nl4gs                                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     70s
	  kube-system                 etcd-default-k8s-different-port-20220725170207-14919                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220725170207-14919             250m (4%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220725170207-14919    200m (3%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-ldpkt                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220725170207-14919             100m (1%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 metrics-server-5c6f97fb75-2zfng                                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         68s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-tnpqb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-lxsld                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 68s   kube-proxy       
	  Normal  Starting                 83s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  83s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  83s   kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s   kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s   kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasSufficientPID
	  Normal  NodeReady                83s   kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeReady
	  Normal  RegisteredNode           71s   node-controller  Node default-k8s-different-port-20220725170207-14919 event: Registered Node default-k8s-different-port-20220725170207-14919 in Controller
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeNotReady             7s    kubelet          Node default-k8s-different-port-20220725170207-14919 status is now: NodeNotReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [d02df4d7fde3] <==
	* {"level":"info","ts":"2022-07-26T00:07:51.640Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-26T00:07:51.640Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-26T00:07:51.641Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-26T00:07:51.642Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:07:51.642Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:07:51.642Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-26T00:07:51.643Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220725170207-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-26T00:07:52.334Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:07:52.335Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:07:52.335Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:07:52.335Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-26T00:07:52.336Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:09:20 up  1:15,  0 users,  load average: 0.93, 0.84, 0.96
	Linux default-k8s-different-port-20220725170207-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [7aa32347e9be] <==
	* I0726 00:07:56.429207       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0726 00:07:57.039546       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0726 00:07:57.045377       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0726 00:07:57.055584       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0726 00:07:57.236494       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0726 00:08:09.715743       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0726 00:08:10.214068       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0726 00:08:11.638706       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0726 00:08:12.536759       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.80.2]
	W0726 00:08:13.418308       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:08:13.418366       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0726 00:08:13.418376       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0726 00:08:13.418254       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:08:13.418402       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0726 00:08:13.419809       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0726 00:08:13.524719       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.109.149.201]
	I0726 00:08:13.538360       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.106.162.27]
	W0726 00:09:13.375825       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:09:13.376256       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0726 00:09:13.376291       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0726 00:09:13.377997       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:09:13.378036       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0726 00:09:13.378043       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [841f90051695] <==
	* I0726 00:08:12.424102       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-2zfng"
	I0726 00:08:13.332476       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0726 00:08:13.340819       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0726 00:08:13.342257       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0726 00:08:13.345820       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:08:13.349123       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0726 00:08:13.353153       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0726 00:08:13.353505       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:08:13.353580       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:08:13.360416       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0726 00:08:13.360655       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:08:13.360712       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0726 00:08:13.361982       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:08:13.364551       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:08:13.364641       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:08:13.414965       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:08:13.415026       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0726 00:08:13.416401       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0726 00:08:13.416437       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0726 00:08:13.461747       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-tnpqb"
	I0726 00:08:13.461793       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-lxsld"
	E0726 00:08:39.357116       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0726 00:09:12.804890       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0726 00:09:12.809543       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0726 00:09:17.589688       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	
	* 
	* ==> kube-proxy [b08c57265ec6] <==
	* I0726 00:08:11.442484       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0726 00:08:11.442552       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0726 00:08:11.442610       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0726 00:08:11.631007       1 server_others.go:206] "Using iptables Proxier"
	I0726 00:08:11.631053       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0726 00:08:11.631063       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0726 00:08:11.631093       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0726 00:08:11.631110       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:08:11.631331       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:08:11.631522       1 server.go:661] "Version info" version="v1.24.3"
	I0726 00:08:11.631529       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0726 00:08:11.632673       1 config.go:317] "Starting service config controller"
	I0726 00:08:11.632680       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0726 00:08:11.632693       1 config.go:226] "Starting endpoint slice config controller"
	I0726 00:08:11.632696       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0726 00:08:11.633899       1 config.go:444] "Starting node config controller"
	I0726 00:08:11.633930       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0726 00:08:11.733469       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0726 00:08:11.733545       1 shared_informer.go:262] Caches are synced for service config
	I0726 00:08:11.734418       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e73d3ba5e1f3] <==
	* W0726 00:07:54.348205       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0726 00:07:54.348219       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0726 00:07:54.348569       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0726 00:07:54.348628       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0726 00:07:54.348952       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0726 00:07:54.348984       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0726 00:07:54.349258       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:07:54.349388       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0726 00:07:54.349640       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0726 00:07:54.349672       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0726 00:07:54.350245       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0726 00:07:54.350464       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0726 00:07:54.350764       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0726 00:07:54.350921       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0726 00:07:55.227042       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:07:55.227156       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0726 00:07:55.264707       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0726 00:07:55.264761       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0726 00:07:55.275458       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0726 00:07:55.275529       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0726 00:07:55.288705       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0726 00:07:55.288753       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0726 00:07:55.463979       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0726 00:07:55.464048       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0726 00:07:55.845861       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-07-26 00:03:19 UTC, end at Tue 2022-07-26 00:09:21 UTC. --
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284207    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/819703f3-8ea8-4843-983b-e8b99ff546e5-config-volume\") pod \"coredns-6d4b75cb6d-nl4gs\" (UID: \"819703f3-8ea8-4843-983b-e8b99ff546e5\") " pod="kube-system/coredns-6d4b75cb6d-nl4gs"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284221    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e86e20e1-ea9d-459e-9592-2c03c22354cc-kube-proxy\") pod \"kube-proxy-ldpkt\" (UID: \"e86e20e1-ea9d-459e-9592-2c03c22354cc\") " pod="kube-system/kube-proxy-ldpkt"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284235    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba4f819c-dca0-4e2b-a3a6-e411f7978c4e-tmp-dir\") pod \"metrics-server-5c6f97fb75-2zfng\" (UID: \"ba4f819c-dca0-4e2b-a3a6-e411f7978c4e\") " pod="kube-system/metrics-server-5c6f97fb75-2zfng"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284247    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/071ecba8-dbb4-4650-b0c1-80e4dd492eac-tmp\") pod \"storage-provisioner\" (UID: \"071ecba8-dbb4-4650-b0c1-80e4dd492eac\") " pod="kube-system/storage-provisioner"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284261    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgp8h\" (UniqueName: \"kubernetes.io/projected/e86e20e1-ea9d-459e-9592-2c03c22354cc-kube-api-access-qgp8h\") pod \"kube-proxy-ldpkt\" (UID: \"e86e20e1-ea9d-459e-9592-2c03c22354cc\") " pod="kube-system/kube-proxy-ldpkt"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284289    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/47fb3e0a-7080-462a-910d-d9820f6f9eb2-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-lxsld\" (UID: \"47fb3e0a-7080-462a-910d-d9820f6f9eb2\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-lxsld"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284313    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ccxl\" (UniqueName: \"kubernetes.io/projected/819703f3-8ea8-4843-983b-e8b99ff546e5-kube-api-access-6ccxl\") pod \"coredns-6d4b75cb6d-nl4gs\" (UID: \"819703f3-8ea8-4843-983b-e8b99ff546e5\") " pod="kube-system/coredns-6d4b75cb6d-nl4gs"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284329    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v26kk\" (UniqueName: \"kubernetes.io/projected/47fb3e0a-7080-462a-910d-d9820f6f9eb2-kube-api-access-v26kk\") pod \"kubernetes-dashboard-5fd5574d9f-lxsld\" (UID: \"47fb3e0a-7080-462a-910d-d9820f6f9eb2\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-lxsld"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284343    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e86e20e1-ea9d-459e-9592-2c03c22354cc-xtables-lock\") pod \"kube-proxy-ldpkt\" (UID: \"e86e20e1-ea9d-459e-9592-2c03c22354cc\") " pod="kube-system/kube-proxy-ldpkt"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284358    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/54a93336-f701-437f-87d7-2f4fa0355c1d-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-tnpqb\" (UID: \"54a93336-f701-437f-87d7-2f4fa0355c1d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tnpqb"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284373    9834 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nzbx\" (UniqueName: \"kubernetes.io/projected/ba4f819c-dca0-4e2b-a3a6-e411f7978c4e-kube-api-access-5nzbx\") pod \"metrics-server-5c6f97fb75-2zfng\" (UID: \"ba4f819c-dca0-4e2b-a3a6-e411f7978c4e\") " pod="kube-system/metrics-server-5c6f97fb75-2zfng"
	Jul 26 00:09:14 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:14.284386    9834 reconciler.go:157] "Reconciler: start to sync state"
	Jul 26 00:09:15 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:15.451177    9834 request.go:601] Waited for 1.06539654s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/serviceaccounts/coredns/token
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.490307    9834 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.490381    9834 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.490498    9834 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5nzbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-2zfng_kube-system(ba4f819c-dca0-4e2b-a3a6-e411f7978c4e): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.490530    9834 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-2zfng" podUID=ba4f819c-dca0-4e2b-a3a6-e411f7978c4e
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:16.655563    9834 scope.go:110] "RemoveContainer" containerID="f5b14fa2d927367c59b6d733b9cfa34751f4283a51e6eb48c2d50af11e104d99"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.757930    9834 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220725170207-14919\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220725170207-14919"
	Jul 26 00:09:16 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:16.934470    9834 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220725170207-14919\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220725170207-14919"
	Jul 26 00:09:17 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:17.157004    9834 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220725170207-14919\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220725170207-14919"
	Jul 26 00:09:17 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:17.328641    9834 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220725170207-14919\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220725170207-14919"
	Jul 26 00:09:17 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:17.448591    9834 scope.go:110] "RemoveContainer" containerID="f5b14fa2d927367c59b6d733b9cfa34751f4283a51e6eb48c2d50af11e104d99"
	Jul 26 00:09:17 default-k8s-different-port-20220725170207-14919 kubelet[9834]: I0726 00:09:17.449554    9834 scope.go:110] "RemoveContainer" containerID="cfa80af081f87b34091a53d522387743397a5ab028340e69dddbec63e65053f0"
	Jul 26 00:09:17 default-k8s-different-port-20220725170207-14919 kubelet[9834]: E0726 00:09:17.450387    9834 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-tnpqb_kubernetes-dashboard(54a93336-f701-437f-87d7-2f4fa0355c1d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tnpqb" podUID=54a93336-f701-437f-87d7-2f4fa0355c1d
	
	* 
	* ==> kubernetes-dashboard [230dc0495ef0] <==
	* 2022/07/26 00:08:25 Starting overwatch
	2022/07/26 00:08:25 Using namespace: kubernetes-dashboard
	2022/07/26 00:08:25 Using in-cluster config to connect to apiserver
	2022/07/26 00:08:25 Using secret token for csrf signing
	2022/07/26 00:08:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/26 00:08:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/26 00:08:25 Successful initial request to the apiserver, version: v1.24.3
	2022/07/26 00:08:25 Generating JWE encryption key
	2022/07/26 00:08:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/26 00:08:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/26 00:08:25 Initializing JWE encryption key from synchronized object
	2022/07/26 00:08:25 Creating in-cluster Sidecar client
	2022/07/26 00:08:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/26 00:08:25 Serving insecurely on HTTP port: 9090
	2022/07/26 00:09:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [325507f96e42] <==
	* I0726 00:08:13.428135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0726 00:08:13.442765       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0726 00:08:13.442813       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0726 00:08:13.471916       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0726 00:08:13.472270       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220725170207-14919_91f05136-07a3-44b4-a7eb-b73612d79f6e!
	I0726 00:08:13.472885       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70a7cd8e-0a7c-4b8e-a4f8-5913815c490d", APIVersion:"v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220725170207-14919_91f05136-07a3-44b4-a7eb-b73612d79f6e became leader
	I0726 00:08:13.572783       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220725170207-14919_91f05136-07a3-44b4-a7eb-b73612d79f6e!
	

-- /stdout --
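The dump above points at the proximate cause of the Pause failure: the kubelet was restarted about 7 seconds before the dump was taken (note the 7s-old events), the node dropped to NotReady with "PLEG is not healthy: pleg has yet to be successful", and the controller manager entered master disruption mode, so the cluster never returned to Ready within the test's window. The metrics-server ErrImagePull against fake.domain appears to be expected noise, since that deployment is deliberately pointed at an unresolvable registry by this test group. A minimal sketch of how the same state could be inspected by hand, using the profile and node names taken from the log above (the logs --file flag is assumed to behave as in current minikube releases):

	$ kubectl --context default-k8s-different-port-20220725170207-14919 get nodes
	$ kubectl --context default-k8s-different-port-20220725170207-14919 describe node default-k8s-different-port-20220725170207-14919
	$ out/minikube-darwin-amd64 -p default-k8s-different-port-20220725170207-14919 logs --file=pause-postmortem.txt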
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220725170207-14919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-2zfng
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220725170207-14919 describe pod metrics-server-5c6f97fb75-2zfng
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220725170207-14919 describe pod metrics-server-5c6f97fb75-2zfng: exit status 1 (303.71948ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-2zfng" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220725170207-14919 describe pod metrics-server-5c6f97fb75-2zfng: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (43.90s)
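To iterate on this failure locally, the single subtest can be rerun from a minikube checkout with go test's -run filter. This is only a sketch: it assumes the standard test/integration layout that helpers_test.go and start_stop_delete_test.go live in, and the timeout value and any binary-selection flags a given checkout needs may differ:

	$ go test ./test/integration -v -timeout 60m -run 'TestStartStop/group/default-k8s-different-port/serial/Pause'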

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0725 17:09:45.862896   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:09:53.107658   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
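The repeated EOF responses from https://127.0.0.1:50822 indicate that the forwarded apiserver endpoint for the old-k8s-version profile has stopped answering entirely, and the interleaved cert_rotation errors are client-go's certificate reload watcher still referencing client.crt files of profiles that earlier tests already deleted; both are symptoms rather than the failure itself, which is that the dashboard pods never become listable. A quick way to check whether the endpoint is alive (the profile name below is a placeholder; the real old-k8s-version profile name is not shown in this excerpt):

	$ curl -k https://127.0.0.1:50822/version
	$ out/minikube-darwin-amd64 status -p <old-k8s-version-profile>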

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:10:18.999543   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:10:55.168912   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:10:57.264235   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 17:10:59.490072   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:11:10.710350   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:11:44.026115   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:11:56.001642   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:12:55.588309   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:12:55.594657   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:12:55.606836   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:12:55.629079   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:12:55.669264   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:12:55.749556   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:12:55.910900   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:12:56.233143   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:12:56.875309   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:12:58.155542   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:13:00.716469   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:13:05.836768   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:13:16.077113   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:13:17.311841   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 17:13:19.058101   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:13:30.477863   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:13:36.559557   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
E0725 17:13:41.278528   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:14:17.522168   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:14:40.363516   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 17:14:45.862870   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:14:53.111048   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:15:19.002322   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:15:39.445006   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 17:15:55.172432   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 17:15:57.266199   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 17:15:59.493754   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:50822/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
E0725 17:16:10.712359   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0725 17:16:44.028155   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0725 17:16:56.003561   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0725 17:17:55.590386   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0725 17:18:17.313777   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0725 17:18:23.286330   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/default-k8s-different-port-20220725170207-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0725 17:18:30.477805   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0725 17:18:41.278745   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 2 (429.923271ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-20220725164610-14919" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220725164610-14919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220725164610-14919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (3.046µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220725164610-14919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725164610-14919
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725164610-14919:

-- stdout --
	[
	    {
	        "Id": "3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf",
	        "Created": "2022-07-25T23:46:16.38043483Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244113,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T23:51:54.648798687Z",
	            "FinishedAt": "2022-07-25T23:51:51.718201115Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/hosts",
	        "LogPath": "/var/lib/docker/containers/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf/3e45dea9c0149860b5036e331206733d0f3a614b9342d09e1d56f10802133bbf-json.log",
	        "Name": "/old-k8s-version-20220725164610-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725164610-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725164610-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c70
0ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/d
ocker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a70
0e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfe1bfd7c21d08751e099f537657387a10067aae592a04321ebff9cdc71b600d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725164610-14919",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725164610-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725164610-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725164610-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1e8c374f85bd4349655b5dfcfe823620a484a31bb6415a2e0b8632dd020452f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50823"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50824"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50825"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50826"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50822"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c1e8c374f85b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725164610-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3e45dea9c014",
	                        "old-k8s-version-20220725164610-14919"
	                    ],
	                    "NetworkID": "cc2155f0f89448c4255b6f474f0b34c64b5460d3acc5441984909bacee63d7d6",
	                    "EndpointID": "aa5034ea8648431be616c4e8025677bb27e250d86bdb70415b75ae2f6083245f",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
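Editor's note: the post-mortem dumps the full inspect JSON above, but the handful of State fields it acts on can be pulled with an inspect format template instead, as the minikube logs themselves do further down with --format={{.State.Status}}. A small illustrative sketch (not helper code from the repo; container name copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Fetch only the container state fields via a Go template,
		// rather than parsing the whole inspect document.
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}",
			"old-k8s-version-20220725164610-14919").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Print(string(out))
	}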
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 2 (428.820253ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
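Editor's note: exit status 2 alongside a Host of "Running" is why the helper says "(may be ok)": minikube status signals component health through its exit code, so a non-zero exit here does not necessarily mean the command itself broke. A hedged Go sketch of separating the two cases (binary path and profile name copied from the log; the interpretation follows the helper's own caveat):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-20220725164610-14919")
		out, err := cmd.Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The command ran; the exit code reports unhealthy components.
			fmt.Printf("host=%q exit=%d (may be ok)\n",
				strings.TrimSpace(string(out)), ee.ExitCode())
		} else if err != nil {
			fmt.Println("could not run status:", err) // a real failure
		}
	}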
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220725164610-14919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220725164610-14919 logs -n 25: (3.511065223s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220725170207-14919      | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | disable-driver-mounts-20220725170207-14919                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725170926-14919 --memory=2200           | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:10 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725170926-14919 --memory=2200           | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:11 PDT | 25 Jul 22 17:11 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:11 PDT | 25 Jul 22 17:11 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:11 PDT | 25 Jul 22 17:11 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 17:10:24
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:10:24.417864   33162 out.go:296] Setting OutFile to fd 1 ...
	I0725 17:10:24.418039   33162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:10:24.418045   33162 out.go:309] Setting ErrFile to fd 2...
	I0725 17:10:24.418049   33162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:10:24.418146   33162 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 17:10:24.418606   33162 out.go:303] Setting JSON to false
	I0725 17:10:24.433673   33162 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":11147,"bootTime":1658783077,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 17:10:24.433808   33162 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 17:10:24.455637   33162 out.go:177] * [newest-cni-20220725170926-14919] minikube v1.26.0 on Darwin 12.5
	I0725 17:10:24.497929   33162 notify.go:193] Checking for updates...
	I0725 17:10:24.519568   33162 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 17:10:24.540666   33162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:10:24.561854   33162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 17:10:24.583713   33162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:10:24.604874   33162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 17:10:24.627553   33162 config.go:178] Loaded profile config "newest-cni-20220725170926-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:10:24.628229   33162 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 17:10:24.697874   33162 docker.go:137] docker version: linux-20.10.17
	I0725 17:10:24.698008   33162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:10:24.830389   33162 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:10:24.768579678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 17:10:24.873988   33162 out.go:177] * Using the docker driver based on existing profile
	I0725 17:10:24.895168   33162 start.go:284] selected driver: docker
	I0725 17:10:24.895244   33162 start.go:808] validating driver "docker" against &{Name:newest-cni-20220725170926-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:10:24.895483   33162 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:10:24.900000   33162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:10:25.035516   33162 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:10:24.970402659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 17:10:25.035678   33162 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0725 17:10:25.035704   33162 cni.go:95] Creating CNI manager for ""
	I0725 17:10:25.035716   33162 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:10:25.035730   33162 start_flags.go:310] config:
	{Name:newest-cni-20220725170926-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:10:25.056836   33162 out.go:177] * Starting control plane node newest-cni-20220725170926-14919 in cluster newest-cni-20220725170926-14919
	I0725 17:10:25.077918   33162 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 17:10:25.098824   33162 out.go:177] * Pulling base image ...
	I0725 17:10:25.141055   33162 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:10:25.141089   33162 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 17:10:25.141143   33162 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0725 17:10:25.141171   33162 cache.go:57] Caching tarball of preloaded images
	I0725 17:10:25.141419   33162 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 17:10:25.142092   33162 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0725 17:10:25.142479   33162 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/config.json ...
	I0725 17:10:25.206257   33162 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 17:10:25.206278   33162 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 17:10:25.206290   33162 cache.go:208] Successfully downloaded all kic artifacts
	I0725 17:10:25.206376   33162 start.go:370] acquiring machines lock for newest-cni-20220725170926-14919: {Name:mk0f9a30538ef211b73bc7dbc2b91673075b0931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:10:25.206461   33162 start.go:374] acquired machines lock for "newest-cni-20220725170926-14919" in 65.585µs
	I0725 17:10:25.206494   33162 start.go:95] Skipping create...Using existing machine configuration
	I0725 17:10:25.206504   33162 fix.go:55] fixHost starting: 
	I0725 17:10:25.206735   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:25.274150   33162 fix.go:103] recreateIfNeeded on newest-cni-20220725170926-14919: state=Stopped err=<nil>
	W0725 17:10:25.274212   33162 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 17:10:25.296502   33162 out.go:177] * Restarting existing docker container for "newest-cni-20220725170926-14919" ...
	I0725 17:10:25.322901   33162 cli_runner.go:164] Run: docker start newest-cni-20220725170926-14919
	I0725 17:10:25.670582   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:25.747051   33162 kic.go:415] container "newest-cni-20220725170926-14919" state is running.
	I0725 17:10:25.747947   33162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725170926-14919
	I0725 17:10:25.835124   33162 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/config.json ...
	I0725 17:10:25.835685   33162 machine.go:88] provisioning docker machine ...
	I0725 17:10:25.835720   33162 ubuntu.go:169] provisioning hostname "newest-cni-20220725170926-14919"
	I0725 17:10:25.835849   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:25.920990   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:25.921209   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:25.921222   33162 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220725170926-14919 && echo "newest-cni-20220725170926-14919" | sudo tee /etc/hostname
	I0725 17:10:26.056106   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220725170926-14919
	
	I0725 17:10:26.056189   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:26.132180   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:26.132352   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:26.132376   33162 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220725170926-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220725170926-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220725170926-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:10:26.253967   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:10:26.253992   33162 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 17:10:26.254014   33162 ubuntu.go:177] setting up certificates
	I0725 17:10:26.254022   33162 provision.go:83] configureAuth start
	I0725 17:10:26.254089   33162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725170926-14919
	I0725 17:10:26.331695   33162 provision.go:138] copyHostCerts
	I0725 17:10:26.331779   33162 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 17:10:26.331794   33162 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 17:10:26.331920   33162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 17:10:26.332199   33162 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 17:10:26.332208   33162 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 17:10:26.332337   33162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 17:10:26.332509   33162 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 17:10:26.332515   33162 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 17:10:26.332575   33162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 17:10:26.332689   33162 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220725170926-14919 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220725170926-14919]
	I0725 17:10:26.717276   33162 provision.go:172] copyRemoteCerts
	I0725 17:10:26.717338   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:10:26.717382   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:26.790688   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:26.880826   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0725 17:10:26.897391   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 17:10:26.915109   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 17:10:26.931087   33162 provision.go:86] duration metric: configureAuth took 677.048653ms
	I0725 17:10:26.931102   33162 ubuntu.go:193] setting minikube options for container-runtime
	I0725 17:10:26.931259   33162 config.go:178] Loaded profile config "newest-cni-20220725170926-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:10:26.931314   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.005264   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:27.005412   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:27.005427   33162 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 17:10:27.129482   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 17:10:27.129493   33162 ubuntu.go:71] root file system type: overlay
	I0725 17:10:27.129635   33162 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 17:10:27.129721   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.201716   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:27.201890   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:27.201948   33162 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 17:10:27.330950   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 17:10:27.331083   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.403684   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:27.403852   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:27.403866   33162 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 17:10:27.528530   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:10:27.528549   33162 machine.go:91] provisioned docker machine in 1.692843192s
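
The diff-or-replace one-liner a few lines above is the idempotency trick in this phase: the freshly rendered unit lands in docker.service.new, and only when it differs from the live unit is it moved into place, followed by daemon-reload and a restart. On an unchanged re-provision the diff succeeds, so Docker never restarts. A local sketch of the same pattern (paths as in the log, run-as-root assumed, error handling elided):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	current, _ := os.ReadFile("/lib/systemd/system/docker.service")
	candidate, _ := os.ReadFile("/lib/systemd/system/docker.service.new")
	if bytes.Equal(current, candidate) {
		return // unchanged: no reload, no restart, the daemon stays up
	}
	// Unit changed: swap it in and bounce the service, as the SSH command does.
	os.Rename("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service")
	exec.Command("systemctl", "daemon-reload").Run()
	exec.Command("systemctl", "restart", "docker").Run()
}
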
	I0725 17:10:27.528563   33162 start.go:307] post-start starting for "newest-cni-20220725170926-14919" (driver="docker")
	I0725 17:10:27.528570   33162 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:10:27.528633   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:10:27.528689   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.600159   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:27.688418   33162 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:10:27.691836   33162 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 17:10:27.691852   33162 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 17:10:27.691859   33162 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 17:10:27.691864   33162 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 17:10:27.691873   33162 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 17:10:27.691979   33162 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 17:10:27.692128   33162 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 17:10:27.692274   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:10:27.699346   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 17:10:27.715708   33162 start.go:310] post-start completed in 187.135858ms
	I0725 17:10:27.715797   33162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:10:27.715855   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.789256   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:27.875730   33162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 17:10:27.880586   33162 fix.go:57] fixHost completed within 2.674056608s
	I0725 17:10:27.880604   33162 start.go:82] releasing machines lock for "newest-cni-20220725170926-14919", held for 2.674115777s
	I0725 17:10:27.880683   33162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725170926-14919
	I0725 17:10:27.952738   33162 ssh_runner.go:195] Run: systemctl --version
	I0725 17:10:27.952766   33162 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 17:10:27.952822   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.952837   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:28.035925   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:28.037689   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:28.122072   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 17:10:28.343268   33162 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0725 17:10:28.355683   33162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:10:28.420676   33162 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0725 17:10:28.498590   33162 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 17:10:28.508832   33162 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 17:10:28.508892   33162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 17:10:28.518005   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:10:28.530341   33162 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 17:10:28.596050   33162 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 17:10:28.659049   33162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:10:28.725708   33162 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 17:10:28.962213   33162 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 17:10:29.032359   33162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:10:29.104371   33162 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 17:10:29.114153   33162 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 17:10:29.114219   33162 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 17:10:29.117753   33162 start.go:471] Will wait 60s for crictl version
	I0725 17:10:29.117794   33162 ssh_runner.go:195] Run: sudo crictl version
	I0725 17:10:29.147467   33162 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
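
The "Will wait 60s for socket path" step above is a bounded poll: minikube repeatedly runs stat on /var/run/cri-dockerd.sock over SSH until it exists or the deadline passes. The shape of such a wait, sketched locally (the 500ms interval is a guess; minikube's actual cadence may differ):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls stat until the path appears or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
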
	I0725 17:10:29.147535   33162 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 17:10:29.184126   33162 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 17:10:29.262105   33162 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0725 17:10:29.262296   33162 cli_runner.go:164] Run: docker exec -t newest-cni-20220725170926-14919 dig +short host.docker.internal
	I0725 17:10:29.395521   33162 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 17:10:29.395785   33162 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 17:10:29.399754   33162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
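
The bash one-liner above keeps the host.minikube.internal mapping idempotent: strip any stale line for that name, append the current one, then copy a temp file over /etc/hosts. Equivalent logic sketched in Go (needs root; the log copies with sudo cp rather than rename; error handling elided):

package main

import (
	"os"
	"strings"
)

func main() {
	const name = "host.minikube.internal"
	data, _ := os.ReadFile("/etc/hosts")
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line) // drop only the stale mapping
		}
	}
	kept = append(kept, "192.168.65.2\t"+name) // IP taken from the log
	tmp := "/tmp/hosts.new"
	os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	os.Rename(tmp, "/etc/hosts") // the log uses `sudo cp` instead
}
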
	I0725 17:10:29.409524   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:29.503728   33162 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0725 17:10:29.524653   33162 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:10:29.524731   33162 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 17:10:29.558092   33162 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 17:10:29.558111   33162 docker.go:542] Images already preloaded, skipping extraction
	I0725 17:10:29.558184   33162 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 17:10:29.587899   33162 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 17:10:29.587918   33162 cache_images.go:84] Images are preloaded, skipping loading
	I0725 17:10:29.588031   33162 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 17:10:29.663722   33162 cni.go:95] Creating CNI manager for ""
	I0725 17:10:29.663735   33162 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:10:29.663750   33162 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0725 17:10:29.663767   33162 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220725170926-14919 NodeName:newest-cni-20220725170926-14919 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 17:10:29.663896   33162 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220725170926-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
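
A note on the stray %!s(MISSING) and "0%!"(MISSING) fragments in the logged printf commands and in the evictionHard values above: the underlying templates contain a literal %s placeholder and literal "0%" strings, and when the already-rendered text is passed back through Go's fmt with no operands, fmt annotates each stranded verb. The intended values are %s and "0%". A two-liner reproduces the artifact:

package main

import "fmt"

func main() {
	// Re-formatting already-rendered text leaves the verbs with no operand:
	fmt.Println(fmt.Sprintf("printf %s"))    // prints: printf %!s(MISSING)
	fmt.Println(fmt.Sprintf(`nodefs: "0%"`)) // prints: nodefs: "0%!"(MISSING)
}
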
	
	I0725 17:10:29.664003   33162 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220725170926-14919 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 17:10:29.664069   33162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0725 17:10:29.671642   33162 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 17:10:29.671692   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 17:10:29.678773   33162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0725 17:10:29.691506   33162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:10:29.704307   33162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0725 17:10:29.717632   33162 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 17:10:29.721370   33162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:10:29.730835   33162 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919 for IP: 192.168.76.2
	I0725 17:10:29.730956   33162 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 17:10:29.731012   33162 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 17:10:29.731101   33162 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/client.key
	I0725 17:10:29.731184   33162 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/apiserver.key.31bdca25
	I0725 17:10:29.731238   33162 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/proxy-client.key
	I0725 17:10:29.731449   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 17:10:29.731486   33162 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 17:10:29.731499   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:10:29.731529   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 17:10:29.731557   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:10:29.731584   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 17:10:29.731661   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 17:10:29.732224   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 17:10:29.749516   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 17:10:29.767634   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:10:29.784829   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:10:29.802003   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:10:29.819158   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 17:10:29.837643   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:10:29.854418   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 17:10:29.871121   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 17:10:29.888831   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:10:29.906470   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 17:10:29.923739   33162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 17:10:29.935970   33162 ssh_runner.go:195] Run: openssl version
	I0725 17:10:29.941798   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 17:10:29.949647   33162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 17:10:29.953437   33162 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 17:10:29.953475   33162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 17:10:29.958685   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 17:10:29.965689   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:10:29.973553   33162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:10:29.977512   33162 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:10:29.977554   33162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:10:29.984480   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:10:29.991634   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 17:10:29.999492   33162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 17:10:30.003199   33162 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 17:10:30.003247   33162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 17:10:30.008320   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
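
The 51391683.0 / b5213941.0 link names created above follow OpenSSL's subject-hash convention: tools look up a trusted CA by the hash of its subject name, so each PEM gets a <hash>.0 symlink in /etc/ssl/certs. A sketch of forging one such link (requires the openssl binary and root; the path and expected hash are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace the link if it exists
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}
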
	I0725 17:10:30.015441   33162 kubeadm.go:395] StartCluster: {Name:newest-cni-20220725170926-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:10:30.015575   33162 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 17:10:30.043891   33162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 17:10:30.051217   33162 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 17:10:30.051232   33162 kubeadm.go:626] restartCluster start
	I0725 17:10:30.051280   33162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 17:10:30.057850   33162 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.057966   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:30.133450   33162 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220725170926-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:10:30.133609   33162 kubeconfig.go:127] "newest-cni-20220725170926-14919" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig - will repair!
	I0725 17:10:30.133957   33162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:10:30.135316   33162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 17:10:30.142665   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.142722   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.150789   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.350928   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.351070   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.360111   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.551272   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.551407   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.562094   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.751690   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.751824   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.761903   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.952947   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.953087   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.963586   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.152852   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.153026   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.163487   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.350935   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.351078   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.360517   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.552584   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.552823   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.563420   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.752110   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.752218   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.763404   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.952598   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.952755   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.963313   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.152570   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.152722   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.163109   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.352596   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.352784   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.363770   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.550939   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.551002   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.560558   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.752982   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.753160   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.763614   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.951083   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.951172   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.960400   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.153040   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:33.153150   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:33.163587   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.163603   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:33.163648   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:33.171326   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.171337   33162 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0725 17:10:33.171344   33162 kubeadm.go:1092] stopping kube-system containers ...
	I0725 17:10:33.171406   33162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 17:10:33.202416   33162 docker.go:443] Stopping containers: [bb9e40d7b806 2e2b1e12a0d8 e3093a0bea73 a5c118b426c2 0f325df2490e b56e26e25b9e 78d80d7126ed eb8d77894732 c00a5e112263 54430765218a 22c1ccaaf65a 264f85de3b55 1ae34c8051d5 7e75f9965e1a 0c966b0d8030 caf103a64c25 3a3b08020459]
	I0725 17:10:33.202492   33162 ssh_runner.go:195] Run: docker stop bb9e40d7b806 2e2b1e12a0d8 e3093a0bea73 a5c118b426c2 0f325df2490e b56e26e25b9e 78d80d7126ed eb8d77894732 c00a5e112263 54430765218a 22c1ccaaf65a 264f85de3b55 1ae34c8051d5 7e75f9965e1a 0c966b0d8030 caf103a64c25 3a3b08020459
	I0725 17:10:33.234063   33162 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 17:10:33.245377   33162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:10:33.253298   33162 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 26 00:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 26 00:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul 26 00:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 26 00:09 /etc/kubernetes/scheduler.conf
	
	I0725 17:10:33.253358   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 17:10:33.261429   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 17:10:33.269924   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 17:10:33.277451   33162 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.277515   33162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 17:10:33.285562   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 17:10:33.294082   33162 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.294144   33162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 17:10:33.301728   33162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:10:33.309325   33162 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 17:10:33.309339   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:33.357983   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:33.990711   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:34.163018   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:34.211887   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:34.268693   33162 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:10:34.268801   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:34.814412   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:35.314380   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:35.329818   33162 api_server.go:71] duration metric: took 1.061125837s to wait for apiserver process to appear ...
	I0725 17:10:35.329834   33162 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:10:35.329847   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:35.331369   33162 api_server.go:256] stopped: https://127.0.0.1:52980/healthz: Get "https://127.0.0.1:52980/healthz": EOF
	I0725 17:10:35.832966   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:38.791798   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0725 17:10:38.791817   33162 api_server.go:102] status: https://127.0.0.1:52980/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0725 17:10:38.831613   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:38.839025   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 17:10:38.839048   33162 api_server.go:102] status: https://127.0.0.1:52980/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:10:39.331548   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:39.340855   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 17:10:39.340870   33162 api_server.go:102] status: https://127.0.0.1:52980/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:10:39.831506   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:39.837149   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 17:10:39.837177   33162 api_server.go:102] status: https://127.0.0.1:52980/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:10:40.331504   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:40.338177   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 200:
	ok
	I0725 17:10:40.344835   33162 api_server.go:140] control plane version: v1.24.3
	I0725 17:10:40.344850   33162 api_server.go:130] duration metric: took 5.014977391s to wait for apiserver health ...
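
The sequence above — connection EOF, then 403 while RBAC bootstraps, then 500 while poststarthooks settle, then 200 — is the normal apiserver cold-start arc, and api_server.go simply polls /healthz until it reads "ok". A sketch of such a probe (port taken from the log; InsecureSkipVerify mirrors probing before trust is wired up, and a real client would pin the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://127.0.0.1:52980/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthy:", string(body))
				return
			}
			// 403/500 here mean "still starting"; keep polling.
			fmt.Printf("status %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}
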
	I0725 17:10:40.344856   33162 cni.go:95] Creating CNI manager for ""
	I0725 17:10:40.344860   33162 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:10:40.344872   33162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:10:40.352662   33162 system_pods.go:59] 9 kube-system pods found
	I0725 17:10:40.352682   33162 system_pods.go:61] "coredns-6d4b75cb6d-dmnl4" [75f79fe8-36b7-421f-bb6c-f04ddc553086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:40.352688   33162 system_pods.go:61] "coredns-6d4b75cb6d-nwgth" [9423c7c6-992c-437c-ad7e-28a2ab1eecdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:40.352693   33162 system_pods.go:61] "etcd-newest-cni-20220725170926-14919" [7aca802c-2727-4227-9c2c-c969f0a334cf] Running
	I0725 17:10:40.352697   33162 system_pods.go:61] "kube-apiserver-newest-cni-20220725170926-14919" [aa239dc3-e3c0-4446-957a-24cd198cbb3c] Running
	I0725 17:10:40.352701   33162 system_pods.go:61] "kube-controller-manager-newest-cni-20220725170926-14919" [a5400bd1-f383-426d-b6f6-265553b518ea] Running
	I0725 17:10:40.352704   33162 system_pods.go:61] "kube-proxy-thgm5" [2bd1bc65-9c26-4b8e-86b9-3e0bd3599e69] Running
	I0725 17:10:40.352709   33162 system_pods.go:61] "kube-scheduler-newest-cni-20220725170926-14919" [9eaafaf3-71e5-4e23-8f04-0b6c5c8e1357] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 17:10:40.352718   33162 system_pods.go:61] "metrics-server-5c6f97fb75-lsp4c" [6751fa1e-1d48-4008-9432-cdac2124118b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:10:40.352722   33162 system_pods.go:61] "storage-provisioner" [50d8c534-72e2-4185-b2d1-5ce19567413e] Running
	I0725 17:10:40.352726   33162 system_pods.go:74] duration metric: took 7.849401ms to wait for pod list to return data ...
	I0725 17:10:40.352733   33162 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:10:40.355773   33162 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:10:40.355787   33162 node_conditions.go:123] node cpu capacity is 6
	I0725 17:10:40.355801   33162 node_conditions.go:105] duration metric: took 3.065104ms to run NodePressure ...
	I0725 17:10:40.355813   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:40.557739   33162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 17:10:40.604152   33162 ops.go:34] apiserver oom_adj: -16
	I0725 17:10:40.604170   33162 kubeadm.go:630] restartCluster took 10.552861404s
	I0725 17:10:40.604181   33162 kubeadm.go:397] StartCluster complete in 10.58867566s
	I0725 17:10:40.604201   33162 settings.go:142] acquiring lock: {Name:mkcd702d4f365962a78fa014f59c2f8489658e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:10:40.604299   33162 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:10:40.605113   33162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:10:40.609196   33162 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220725170926-14919" rescaled to 1
	I0725 17:10:40.609249   33162 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 17:10:40.609304   33162 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 17:10:40.609312   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 17:10:40.633783   33162 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220725170926-14919"
	I0725 17:10:40.633785   33162 addons.go:65] Setting dashboard=true in profile "newest-cni-20220725170926-14919"
	I0725 17:10:40.633802   33162 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220725170926-14919"
	I0725 17:10:40.633805   33162 addons.go:153] Setting addon dashboard=true in "newest-cni-20220725170926-14919"
	I0725 17:10:40.633804   33162 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220725170926-14919"
	I0725 17:10:40.609473   33162 config.go:178] Loaded profile config "newest-cni-20220725170926-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:10:40.633819   33162 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220725170926-14919"
	W0725 17:10:40.633826   33162 addons.go:162] addon metrics-server should already be in state true
	W0725 17:10:40.633817   33162 addons.go:162] addon storage-provisioner should already be in state true
	I0725 17:10:40.633679   33162 out.go:177] * Verifying Kubernetes components...
	I0725 17:10:40.633830   33162 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220725170926-14919"
	W0725 17:10:40.633816   33162 addons.go:162] addon dashboard should already be in state true
	I0725 17:10:40.633869   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.691865   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.633872   33162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220725170926-14919"
	I0725 17:10:40.691912   33162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:10:40.633885   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.692553   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.692555   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.692555   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.692649   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.825703   33162 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220725170926-14919"
	W0725 17:10:40.896300   33162 addons.go:162] addon default-storageclass should already be in state true
	I0725 17:10:40.896340   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.826795   33162 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0725 17:10:40.826824   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:40.837858   33162 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 17:10:40.859007   33162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 17:10:40.896237   33162 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 17:10:40.898501   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.996949   33162 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 17:10:40.939331   33162 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:10:40.976132   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 17:10:40.997061   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 17:10:41.035393   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 17:10:41.035423   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 17:10:41.035395   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 17:10:41.035542   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.035653   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.035666   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.062617   33162 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:10:41.062869   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:41.064899   33162 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 17:10:41.064918   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 17:10:41.065018   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.078455   33162 api_server.go:71] duration metric: took 469.124908ms to wait for apiserver process to appear ...
	I0725 17:10:41.078510   33162 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:10:41.078542   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:41.088995   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 200:
	ok
	I0725 17:10:41.090707   33162 api_server.go:140] control plane version: v1.24.3
	I0725 17:10:41.090725   33162 api_server.go:130] duration metric: took 12.204434ms to wait for apiserver health ...
	I0725 17:10:41.090732   33162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:10:41.100882   33162 system_pods.go:59] 9 kube-system pods found
	I0725 17:10:41.100912   33162 system_pods.go:61] "coredns-6d4b75cb6d-dmnl4" [75f79fe8-36b7-421f-bb6c-f04ddc553086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:41.100936   33162 system_pods.go:61] "coredns-6d4b75cb6d-nwgth" [9423c7c6-992c-437c-ad7e-28a2ab1eecdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:41.100947   33162 system_pods.go:61] "etcd-newest-cni-20220725170926-14919" [7aca802c-2727-4227-9c2c-c969f0a334cf] Running
	I0725 17:10:41.100956   33162 system_pods.go:61] "kube-apiserver-newest-cni-20220725170926-14919" [aa239dc3-e3c0-4446-957a-24cd198cbb3c] Running
	I0725 17:10:41.100967   33162 system_pods.go:61] "kube-controller-manager-newest-cni-20220725170926-14919" [a5400bd1-f383-426d-b6f6-265553b518ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 17:10:41.100973   33162 system_pods.go:61] "kube-proxy-thgm5" [2bd1bc65-9c26-4b8e-86b9-3e0bd3599e69] Running
	I0725 17:10:41.100990   33162 system_pods.go:61] "kube-scheduler-newest-cni-20220725170926-14919" [9eaafaf3-71e5-4e23-8f04-0b6c5c8e1357] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 17:10:41.100996   33162 system_pods.go:61] "metrics-server-5c6f97fb75-lsp4c" [6751fa1e-1d48-4008-9432-cdac2124118b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:10:41.101006   33162 system_pods.go:61] "storage-provisioner" [50d8c534-72e2-4185-b2d1-5ce19567413e] Running
	I0725 17:10:41.101012   33162 system_pods.go:74] duration metric: took 10.276317ms to wait for pod list to return data ...
	I0725 17:10:41.101018   33162 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:10:41.104454   33162 default_sa.go:45] found service account: "default"
	I0725 17:10:41.104471   33162 default_sa.go:55] duration metric: took 3.4456ms for default service account to be created ...
	I0725 17:10:41.104481   33162 kubeadm.go:572] duration metric: took 495.187773ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0725 17:10:41.104501   33162 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:10:41.109202   33162 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:10:41.109220   33162 node_conditions.go:123] node cpu capacity is 6
	I0725 17:10:41.109230   33162 node_conditions.go:105] duration metric: took 4.725267ms to run NodePressure ...
	I0725 17:10:41.109240   33162 start.go:216] waiting for startup goroutines ...
	I0725 17:10:41.154538   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.155606   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.159137   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.171747   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.277597   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 17:10:41.277615   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 17:10:41.277691   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 17:10:41.277701   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 17:10:41.288691   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:10:41.300595   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 17:10:41.300646   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 17:10:41.305575   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 17:10:41.305588   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 17:10:41.305589   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 17:10:41.320296   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:10:41.320311   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 17:10:41.325064   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 17:10:41.325088   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 17:10:41.343386   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:10:41.352726   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 17:10:41.352746   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 17:10:41.429907   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 17:10:41.429923   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 17:10:41.447523   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 17:10:41.447537   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 17:10:41.516266   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 17:10:41.516285   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 17:10:41.535836   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 17:10:41.535850   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 17:10:41.554058   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:10:41.554073   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 17:10:41.572255   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:10:42.151794   33162 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220725170926-14919"
	I0725 17:10:42.288108   33162 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 17:10:42.345471   33162 addons.go:414] enableAddons completed in 1.736158741s
	I0725 17:10:42.381249   33162 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0725 17:10:42.403296   33162 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220725170926-14919" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 23:51:54 UTC, end at Tue 2022-07-26 00:18:52 UTC. --
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[130]: time="2022-07-25T23:51:57.280625561Z" level=info msg="Processing signal 'terminated'"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[130]: time="2022-07-25T23:51:57.281621938Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[130]: time="2022-07-25T23:51:57.282179113Z" level=info msg="Daemon shutdown complete"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: docker.service: Succeeded.
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.333388918Z" level=info msg="Starting up"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335280455Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335321821Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335353731Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.335365331Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336739849Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336771694Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336792129Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.336802010Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.340124810Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.344053927Z" level=info msg="Loading containers: start."
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.416564242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.446250062Z" level=info msg="Loading containers: done."
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.454564731Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.454620735Z" level=info msg="Daemon has completed initialization"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 systemd[1]: Started Docker Application Container Engine.
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.478491259Z" level=info msg="API listen on [::]:2376"
	Jul 25 23:51:57 old-k8s-version-20220725164610-14919 dockerd[427]: time="2022-07-25T23:51:57.481408702Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-07-26T00:18:54Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  00:18:54 up  1:25,  0 users,  load average: 0.32, 0.41, 0.71
	Linux old-k8s-version-20220725164610-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 23:51:54 UTC, end at Tue 2022-07-26 00:18:54 UTC. --
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1670.
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 kubelet[34299]: I0726 00:18:53.986482   34299 server.go:410] Version: v1.16.0
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 kubelet[34299]: I0726 00:18:53.986796   34299 plugins.go:100] No cloud provider specified.
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 kubelet[34299]: I0726 00:18:53.986812   34299 server.go:773] Client rotation is on, will bootstrap in background
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 kubelet[34299]: I0726 00:18:53.988891   34299 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 kubelet[34299]: W0726 00:18:53.990408   34299 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 kubelet[34299]: W0726 00:18:53.990520   34299 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 kubelet[34299]: F0726 00:18:53.990597   34299 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 26 00:18:53 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1671.
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 kubelet[34329]: I0726 00:18:54.723837   34329 server.go:410] Version: v1.16.0
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 kubelet[34329]: I0726 00:18:54.724695   34329 plugins.go:100] No cloud provider specified.
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 kubelet[34329]: I0726 00:18:54.724763   34329 server.go:773] Client rotation is on, will bootstrap in background
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 kubelet[34329]: I0726 00:18:54.726420   34329 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 kubelet[34329]: W0726 00:18:54.727082   34329 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 kubelet[34329]: W0726 00:18:54.727166   34329 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 kubelet[34329]: F0726 00:18:54.727229   34329 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 26 00:18:54 old-k8s-version-20220725164610-14919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0725 17:18:54.677665   33952 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 2 (438.63095ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220725164610-14919" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.93s)

TestStartStop/group/newest-cni/serial/Pause (50.69s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220725170926-14919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919: exit status 2 (16.123635449s)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919: exit status 2 (16.111485615s)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220725170926-14919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220725170926-14919
helpers_test.go:235: (dbg) docker inspect newest-cni-20220725170926-14919:

-- stdout --
	[
	    {
	        "Id": "70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21",
	        "Created": "2022-07-26T00:09:33.395820497Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 311558,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-26T00:10:25.67552367Z",
	            "FinishedAt": "2022-07-26T00:10:23.755408995Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21/hostname",
	        "HostsPath": "/var/lib/docker/containers/70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21/hosts",
	        "LogPath": "/var/lib/docker/containers/70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21/70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21-json.log",
	        "Name": "/newest-cni-20220725170926-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220725170926-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220725170926-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9988b34e684b73313054847e6562f040595dcfd62c5c949651e81ffcf9758814-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c70
0ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/d
ocker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a70
0e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9988b34e684b73313054847e6562f040595dcfd62c5c949651e81ffcf9758814/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9988b34e684b73313054847e6562f040595dcfd62c5c949651e81ffcf9758814/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9988b34e684b73313054847e6562f040595dcfd62c5c949651e81ffcf9758814/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220725170926-14919",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220725170926-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220725170926-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220725170926-14919",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220725170926-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "412c4e11789871b4076bac2436085935ccc7d29b3d5994ca0d062757f0371991",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52976"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52977"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52979"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52980"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/412c4e117898",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220725170926-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "70670a22d2f6",
	                        "newest-cni-20220725170926-14919"
	                    ],
	                    "NetworkID": "6ab3bfd3244da82ffafcd9e785631e57ff855e44a8e471d53739ac11b8e548ef",
	                    "EndpointID": "d08e6ac741221eb652f059400fbb2d5dff6b744e1ae6f2b1f4f34bde073b4428",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220725170926-14919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220725170926-14919 logs -n 25: (4.233406465s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220725170207-14919      | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | disable-driver-mounts-20220725170207-14919                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725170926-14919 --memory=2200           | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:10 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725170926-14919 --memory=2200           | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:11 PDT | 25 Jul 22 17:11 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 17:10:24
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:10:24.417864   33162 out.go:296] Setting OutFile to fd 1 ...
	I0725 17:10:24.418039   33162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:10:24.418045   33162 out.go:309] Setting ErrFile to fd 2...
	I0725 17:10:24.418049   33162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:10:24.418146   33162 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 17:10:24.418606   33162 out.go:303] Setting JSON to false
	I0725 17:10:24.433673   33162 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":11147,"bootTime":1658783077,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 17:10:24.433808   33162 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 17:10:24.455637   33162 out.go:177] * [newest-cni-20220725170926-14919] minikube v1.26.0 on Darwin 12.5
	I0725 17:10:24.497929   33162 notify.go:193] Checking for updates...
	I0725 17:10:24.519568   33162 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 17:10:24.540666   33162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:10:24.561854   33162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 17:10:24.583713   33162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:10:24.604874   33162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 17:10:24.627553   33162 config.go:178] Loaded profile config "newest-cni-20220725170926-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:10:24.628229   33162 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 17:10:24.697874   33162 docker.go:137] docker version: linux-20.10.17
	I0725 17:10:24.698008   33162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:10:24.830389   33162 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:10:24.768579678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 17:10:24.873988   33162 out.go:177] * Using the docker driver based on existing profile
	I0725 17:10:24.895168   33162 start.go:284] selected driver: docker
	I0725 17:10:24.895244   33162 start.go:808] validating driver "docker" against &{Name:newest-cni-20220725170926-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:10:24.895483   33162 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:10:24.900000   33162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:10:25.035516   33162 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:10:24.970402659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 17:10:25.035678   33162 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0725 17:10:25.035704   33162 cni.go:95] Creating CNI manager for ""
	I0725 17:10:25.035716   33162 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:10:25.035730   33162 start_flags.go:310] config:
	{Name:newest-cni-20220725170926-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:10:25.056836   33162 out.go:177] * Starting control plane node newest-cni-20220725170926-14919 in cluster newest-cni-20220725170926-14919
	I0725 17:10:25.077918   33162 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 17:10:25.098824   33162 out.go:177] * Pulling base image ...
	I0725 17:10:25.141055   33162 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:10:25.141089   33162 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 17:10:25.141143   33162 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0725 17:10:25.141171   33162 cache.go:57] Caching tarball of preloaded images
	I0725 17:10:25.141419   33162 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 17:10:25.142092   33162 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0725 17:10:25.142479   33162 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/config.json ...
	I0725 17:10:25.206257   33162 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 17:10:25.206278   33162 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 17:10:25.206290   33162 cache.go:208] Successfully downloaded all kic artifacts
	I0725 17:10:25.206376   33162 start.go:370] acquiring machines lock for newest-cni-20220725170926-14919: {Name:mk0f9a30538ef211b73bc7dbc2b91673075b0931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:10:25.206461   33162 start.go:374] acquired machines lock for "newest-cni-20220725170926-14919" in 65.585µs
	I0725 17:10:25.206494   33162 start.go:95] Skipping create...Using existing machine configuration
	I0725 17:10:25.206504   33162 fix.go:55] fixHost starting: 
	I0725 17:10:25.206735   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:25.274150   33162 fix.go:103] recreateIfNeeded on newest-cni-20220725170926-14919: state=Stopped err=<nil>
	W0725 17:10:25.274212   33162 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 17:10:25.296502   33162 out.go:177] * Restarting existing docker container for "newest-cni-20220725170926-14919" ...
	I0725 17:10:25.322901   33162 cli_runner.go:164] Run: docker start newest-cni-20220725170926-14919
	I0725 17:10:25.670582   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:25.747051   33162 kic.go:415] container "newest-cni-20220725170926-14919" state is running.
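	
	The fixHost sequence above restarts the stopped profile container rather than recreating it. Outside of minikube, the same check-and-start can be sketched with plain docker commands (profile name reused from this run; note docker reports a stopped container as "exited", which minikube renders as state=Stopped):
	
		NAME=newest-cni-20220725170926-14919
		# inspect the container state with the same Go template minikube uses
		STATE=$(docker container inspect -f '{{.State.Status}}' "$NAME")
		if [ "$STATE" != "running" ]; then
			docker start "$NAME"
		fi
	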
	I0725 17:10:25.747947   33162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725170926-14919
	I0725 17:10:25.835124   33162 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/config.json ...
	I0725 17:10:25.835685   33162 machine.go:88] provisioning docker machine ...
	I0725 17:10:25.835720   33162 ubuntu.go:169] provisioning hostname "newest-cni-20220725170926-14919"
	I0725 17:10:25.835849   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:25.920990   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:25.921209   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:25.921222   33162 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220725170926-14919 && echo "newest-cni-20220725170926-14919" | sudo tee /etc/hostname
	I0725 17:10:26.056106   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220725170926-14919
	
	I0725 17:10:26.056189   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:26.132180   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:26.132352   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:26.132376   33162 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220725170926-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220725170926-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220725170926-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:10:26.253967   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: 
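	
	The hostname script sent over SSH above is idempotent: it rewrites an existing 127.0.1.1 entry in place and only appends one when none exists. A minimal standalone sketch of the same idiom (hostname taken from this run; any name works):
	
		NAME=newest-cni-20220725170926-14919
		sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
		# map the loopback alias to the new hostname only if it is missing
		if ! grep -q "$NAME" /etc/hosts; then
			echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
		fi
	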
	I0725 17:10:26.253992   33162 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 17:10:26.254014   33162 ubuntu.go:177] setting up certificates
	I0725 17:10:26.254022   33162 provision.go:83] configureAuth start
	I0725 17:10:26.254089   33162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725170926-14919
	I0725 17:10:26.331695   33162 provision.go:138] copyHostCerts
	I0725 17:10:26.331779   33162 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 17:10:26.331794   33162 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 17:10:26.331920   33162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 17:10:26.332199   33162 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 17:10:26.332208   33162 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 17:10:26.332337   33162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 17:10:26.332509   33162 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 17:10:26.332515   33162 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 17:10:26.332575   33162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 17:10:26.332689   33162 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220725170926-14919 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220725170926-14919]
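	
	The server certificate generated here carries the node IP, localhost, and the profile name as SANs; when a restart fails TLS verification against the in-container docker daemon, dumping that SAN list is the quickest check. A sketch, assuming MINIKUBE_HOME is set as shown earlier in this log:
	
		openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
			| grep -A1 'Subject Alternative Name'
	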
	I0725 17:10:26.717276   33162 provision.go:172] copyRemoteCerts
	I0725 17:10:26.717338   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:10:26.717382   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:26.790688   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:26.880826   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0725 17:10:26.897391   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 17:10:26.915109   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 17:10:26.931087   33162 provision.go:86] duration metric: configureAuth took 677.048653ms
	I0725 17:10:26.931102   33162 ubuntu.go:193] setting minikube options for container-runtime
	I0725 17:10:26.931259   33162 config.go:178] Loaded profile config "newest-cni-20220725170926-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:10:26.931314   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.005264   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:27.005412   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:27.005427   33162 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 17:10:27.129482   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 17:10:27.129493   33162 ubuntu.go:71] root file system type: overlay
	I0725 17:10:27.129635   33162 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 17:10:27.129721   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.201716   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:27.201890   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:27.201948   33162 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 17:10:27.330950   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 17:10:27.331083   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.403684   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:27.403852   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:27.403866   33162 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 17:10:27.528530   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: 
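	
	The diff-and-swap one-liner above is what keeps the unit update idempotent: docker.service is only replaced, and dockerd only restarted, when the newly rendered file actually differs from the live one. Unrolled into plain shell, the pattern is roughly:
	
		NEW=/lib/systemd/system/docker.service.new
		OLD=/lib/systemd/system/docker.service
		if ! sudo diff -u "$OLD" "$NEW"; then   # non-zero exit: files differ
			sudo mv "$NEW" "$OLD"
			sudo systemctl -f daemon-reload
			sudo systemctl -f enable docker
			sudo systemctl -f restart docker
		fi
	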
	I0725 17:10:27.528549   33162 machine.go:91] provisioned docker machine in 1.692843192s
	I0725 17:10:27.528563   33162 start.go:307] post-start starting for "newest-cni-20220725170926-14919" (driver="docker")
	I0725 17:10:27.528570   33162 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:10:27.528633   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:10:27.528689   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.600159   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:27.688418   33162 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:10:27.691836   33162 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 17:10:27.691852   33162 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 17:10:27.691859   33162 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 17:10:27.691864   33162 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 17:10:27.691873   33162 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 17:10:27.691979   33162 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 17:10:27.692128   33162 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 17:10:27.692274   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:10:27.699346   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 17:10:27.715708   33162 start.go:310] post-start completed in 187.135858ms
	I0725 17:10:27.715797   33162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:10:27.715855   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.789256   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:27.875730   33162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 17:10:27.880586   33162 fix.go:57] fixHost completed within 2.674056608s
	I0725 17:10:27.880604   33162 start.go:82] releasing machines lock for "newest-cni-20220725170926-14919", held for 2.674115777s
	I0725 17:10:27.880683   33162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725170926-14919
	I0725 17:10:27.952738   33162 ssh_runner.go:195] Run: systemctl --version
	I0725 17:10:27.952766   33162 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 17:10:27.952822   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.952837   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:28.035925   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:28.037689   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:28.122072   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 17:10:28.343268   33162 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0725 17:10:28.355683   33162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:10:28.420676   33162 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0725 17:10:28.498590   33162 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 17:10:28.508832   33162 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 17:10:28.508892   33162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 17:10:28.518005   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:10:28.530341   33162 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 17:10:28.596050   33162 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 17:10:28.659049   33162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:10:28.725708   33162 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 17:10:28.962213   33162 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 17:10:29.032359   33162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:10:29.104371   33162 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 17:10:29.114153   33162 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 17:10:29.114219   33162 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 17:10:29.117753   33162 start.go:471] Will wait 60s for crictl version
	I0725 17:10:29.117794   33162 ssh_runner.go:195] Run: sudo crictl version
	I0725 17:10:29.147467   33162 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 17:10:29.147535   33162 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 17:10:29.184126   33162 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 17:10:29.262105   33162 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0725 17:10:29.262296   33162 cli_runner.go:164] Run: docker exec -t newest-cni-20220725170926-14919 dig +short host.docker.internal
	I0725 17:10:29.395521   33162 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 17:10:29.395785   33162 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 17:10:29.399754   33162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
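	
	That one-liner rewrites /etc/hosts without sed -i, which can fail on the bind-mounted /etc/hosts inside a container: it filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the temp file back over the original. Unrolled, with the IP dug out of host.docker.internal above:
	
		HOST_IP=192.168.65.2
		{ grep -v $'\thost.minikube.internal$' /etc/hosts
		  printf '%s\thost.minikube.internal\n' "$HOST_IP"
		} > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts
	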
	I0725 17:10:29.409524   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:29.503728   33162 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0725 17:10:29.524653   33162 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:10:29.524731   33162 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 17:10:29.558092   33162 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 17:10:29.558111   33162 docker.go:542] Images already preloaded, skipping extraction
	I0725 17:10:29.558184   33162 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 17:10:29.587899   33162 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 17:10:29.587918   33162 cache_images.go:84] Images are preloaded, skipping loading
	I0725 17:10:29.588031   33162 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 17:10:29.663722   33162 cni.go:95] Creating CNI manager for ""
	I0725 17:10:29.663735   33162 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:10:29.663750   33162 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0725 17:10:29.663767   33162 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220725170926-14919 NodeName:newest-cni-20220725170926-14919 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 17:10:29.663896   33162 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220725170926-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 17:10:29.664003   33162 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220725170926-14919 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 17:10:29.664069   33162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0725 17:10:29.671642   33162 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 17:10:29.671692   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 17:10:29.678773   33162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0725 17:10:29.691506   33162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:10:29.704307   33162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
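	
	The kubeadm config rendered above is staged here as /var/tmp/minikube/kubeadm.yaml.new before being moved into its final name. To sanity-check a config like this by hand, kubeadm can parse and exercise it without touching cluster state (path assumes the file has already been moved into place):
	
		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	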
	I0725 17:10:29.717632   33162 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 17:10:29.721370   33162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:10:29.730835   33162 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919 for IP: 192.168.76.2
	I0725 17:10:29.730956   33162 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 17:10:29.731012   33162 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 17:10:29.731101   33162 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/client.key
	I0725 17:10:29.731184   33162 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/apiserver.key.31bdca25
	I0725 17:10:29.731238   33162 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/proxy-client.key
	I0725 17:10:29.731449   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 17:10:29.731486   33162 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 17:10:29.731499   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:10:29.731529   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 17:10:29.731557   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:10:29.731584   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 17:10:29.731661   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 17:10:29.732224   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 17:10:29.749516   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 17:10:29.767634   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:10:29.784829   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:10:29.802003   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:10:29.819158   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 17:10:29.837643   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:10:29.854418   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 17:10:29.871121   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 17:10:29.888831   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:10:29.906470   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 17:10:29.923739   33162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 17:10:29.935970   33162 ssh_runner.go:195] Run: openssl version
	I0725 17:10:29.941798   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 17:10:29.949647   33162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 17:10:29.953437   33162 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 17:10:29.953475   33162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 17:10:29.958685   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 17:10:29.965689   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:10:29.973553   33162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:10:29.977512   33162 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:10:29.977554   33162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:10:29.984480   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:10:29.991634   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 17:10:29.999492   33162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 17:10:30.003199   33162 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 17:10:30.003247   33162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 17:10:30.008320   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
	I0725 17:10:30.015441   33162 kubeadm.go:395] StartCluster: {Name:newest-cni-20220725170926-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:10:30.015575   33162 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 17:10:30.043891   33162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 17:10:30.051217   33162 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 17:10:30.051232   33162 kubeadm.go:626] restartCluster start
	I0725 17:10:30.051280   33162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 17:10:30.057850   33162 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.057966   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:30.133450   33162 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220725170926-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:10:30.133609   33162 kubeconfig.go:127] "newest-cni-20220725170926-14919" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig - will repair!
	I0725 17:10:30.133957   33162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:10:30.135316   33162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 17:10:30.142665   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.142722   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.150789   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.350928   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.351070   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.360111   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.551272   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.551407   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.562094   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.751690   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.751824   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.761903   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.952947   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.953087   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.963586   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.152852   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.153026   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.163487   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.350935   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.351078   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.360517   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.552584   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.552823   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.563420   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.752110   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.752218   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.763404   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.952598   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.952755   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.963313   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.152570   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.152722   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.163109   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.352596   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.352784   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.363770   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.550939   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.551002   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.560558   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.752982   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.753160   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.763614   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.951083   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.951172   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.960400   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.153040   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:33.153150   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:33.163587   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.163603   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:33.163648   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:33.171326   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.171337   33162 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0725 17:10:33.171344   33162 kubeadm.go:1092] stopping kube-system containers ...
	I0725 17:10:33.171406   33162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 17:10:33.202416   33162 docker.go:443] Stopping containers: [bb9e40d7b806 2e2b1e12a0d8 e3093a0bea73 a5c118b426c2 0f325df2490e b56e26e25b9e 78d80d7126ed eb8d77894732 c00a5e112263 54430765218a 22c1ccaaf65a 264f85de3b55 1ae34c8051d5 7e75f9965e1a 0c966b0d8030 caf103a64c25 3a3b08020459]
	I0725 17:10:33.202492   33162 ssh_runner.go:195] Run: docker stop bb9e40d7b806 2e2b1e12a0d8 e3093a0bea73 a5c118b426c2 0f325df2490e b56e26e25b9e 78d80d7126ed eb8d77894732 c00a5e112263 54430765218a 22c1ccaaf65a 264f85de3b55 1ae34c8051d5 7e75f9965e1a 0c966b0d8030 caf103a64c25 3a3b08020459
	I0725 17:10:33.234063   33162 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 17:10:33.245377   33162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:10:33.253298   33162 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 26 00:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 26 00:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul 26 00:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 26 00:09 /etc/kubernetes/scheduler.conf
	
	I0725 17:10:33.253358   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 17:10:33.261429   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 17:10:33.269924   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 17:10:33.277451   33162 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.277515   33162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 17:10:33.285562   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 17:10:33.294082   33162 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.294144   33162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 17:10:33.301728   33162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:10:33.309325   33162 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 17:10:33.309339   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:33.357983   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:33.990711   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:34.163018   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:34.211887   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:34.268693   33162 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:10:34.268801   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:34.814412   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:35.314380   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:35.329818   33162 api_server.go:71] duration metric: took 1.061125837s to wait for apiserver process to appear ...
	I0725 17:10:35.329834   33162 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:10:35.329847   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:35.331369   33162 api_server.go:256] stopped: https://127.0.0.1:52980/healthz: Get "https://127.0.0.1:52980/healthz": EOF
	I0725 17:10:35.832966   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:38.791798   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0725 17:10:38.791817   33162 api_server.go:102] status: https://127.0.0.1:52980/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0725 17:10:38.831613   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:38.839025   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 17:10:38.839048   33162 api_server.go:102] status: https://127.0.0.1:52980/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:10:39.331548   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:39.340855   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 17:10:39.340870   33162 api_server.go:102] status: https://127.0.0.1:52980/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:10:39.831506   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:39.837149   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 17:10:39.837177   33162 api_server.go:102] status: https://127.0.0.1:52980/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:10:40.331504   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:40.338177   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 200:
	ok
	I0725 17:10:40.344835   33162 api_server.go:140] control plane version: v1.24.3
	I0725 17:10:40.344850   33162 api_server.go:130] duration metric: took 5.014977391s to wait for apiserver health ...
	I0725 17:10:40.344856   33162 cni.go:95] Creating CNI manager for ""
	I0725 17:10:40.344860   33162 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:10:40.344872   33162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:10:40.352662   33162 system_pods.go:59] 9 kube-system pods found
	I0725 17:10:40.352682   33162 system_pods.go:61] "coredns-6d4b75cb6d-dmnl4" [75f79fe8-36b7-421f-bb6c-f04ddc553086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:40.352688   33162 system_pods.go:61] "coredns-6d4b75cb6d-nwgth" [9423c7c6-992c-437c-ad7e-28a2ab1eecdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:40.352693   33162 system_pods.go:61] "etcd-newest-cni-20220725170926-14919" [7aca802c-2727-4227-9c2c-c969f0a334cf] Running
	I0725 17:10:40.352697   33162 system_pods.go:61] "kube-apiserver-newest-cni-20220725170926-14919" [aa239dc3-e3c0-4446-957a-24cd198cbb3c] Running
	I0725 17:10:40.352701   33162 system_pods.go:61] "kube-controller-manager-newest-cni-20220725170926-14919" [a5400bd1-f383-426d-b6f6-265553b518ea] Running
	I0725 17:10:40.352704   33162 system_pods.go:61] "kube-proxy-thgm5" [2bd1bc65-9c26-4b8e-86b9-3e0bd3599e69] Running
	I0725 17:10:40.352709   33162 system_pods.go:61] "kube-scheduler-newest-cni-20220725170926-14919" [9eaafaf3-71e5-4e23-8f04-0b6c5c8e1357] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 17:10:40.352718   33162 system_pods.go:61] "metrics-server-5c6f97fb75-lsp4c" [6751fa1e-1d48-4008-9432-cdac2124118b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:10:40.352722   33162 system_pods.go:61] "storage-provisioner" [50d8c534-72e2-4185-b2d1-5ce19567413e] Running
	I0725 17:10:40.352726   33162 system_pods.go:74] duration metric: took 7.849401ms to wait for pod list to return data ...
	I0725 17:10:40.352733   33162 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:10:40.355773   33162 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:10:40.355787   33162 node_conditions.go:123] node cpu capacity is 6
	I0725 17:10:40.355801   33162 node_conditions.go:105] duration metric: took 3.065104ms to run NodePressure ...
	I0725 17:10:40.355813   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:40.557739   33162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 17:10:40.604152   33162 ops.go:34] apiserver oom_adj: -16
	I0725 17:10:40.604170   33162 kubeadm.go:630] restartCluster took 10.552861404s
	I0725 17:10:40.604181   33162 kubeadm.go:397] StartCluster complete in 10.58867566s
	I0725 17:10:40.604201   33162 settings.go:142] acquiring lock: {Name:mkcd702d4f365962a78fa014f59c2f8489658e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:10:40.604299   33162 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:10:40.605113   33162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:10:40.609196   33162 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220725170926-14919" rescaled to 1
	I0725 17:10:40.609249   33162 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 17:10:40.609304   33162 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 17:10:40.609312   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 17:10:40.633783   33162 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220725170926-14919"
	I0725 17:10:40.633785   33162 addons.go:65] Setting dashboard=true in profile "newest-cni-20220725170926-14919"
	I0725 17:10:40.633802   33162 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220725170926-14919"
	I0725 17:10:40.633805   33162 addons.go:153] Setting addon dashboard=true in "newest-cni-20220725170926-14919"
	I0725 17:10:40.633804   33162 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220725170926-14919"
	I0725 17:10:40.609473   33162 config.go:178] Loaded profile config "newest-cni-20220725170926-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:10:40.633819   33162 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220725170926-14919"
	W0725 17:10:40.633826   33162 addons.go:162] addon metrics-server should already be in state true
	W0725 17:10:40.633817   33162 addons.go:162] addon storage-provisioner should already be in state true
	I0725 17:10:40.633679   33162 out.go:177] * Verifying Kubernetes components...
	I0725 17:10:40.633830   33162 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220725170926-14919"
	W0725 17:10:40.633816   33162 addons.go:162] addon dashboard should already be in state true
	I0725 17:10:40.633869   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.691865   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.633872   33162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220725170926-14919"
	I0725 17:10:40.691912   33162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:10:40.633885   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.692553   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.692555   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.692555   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.692649   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.825703   33162 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220725170926-14919"
	W0725 17:10:40.896300   33162 addons.go:162] addon default-storageclass should already be in state true
	I0725 17:10:40.896340   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.826795   33162 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0725 17:10:40.826824   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:40.837858   33162 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 17:10:40.859007   33162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 17:10:40.896237   33162 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 17:10:40.898501   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.996949   33162 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 17:10:40.939331   33162 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:10:40.976132   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 17:10:40.997061   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 17:10:41.035393   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 17:10:41.035423   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 17:10:41.035395   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 17:10:41.035542   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.035653   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.035666   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.062617   33162 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:10:41.062869   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:41.064899   33162 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 17:10:41.064918   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 17:10:41.065018   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.078455   33162 api_server.go:71] duration metric: took 469.124908ms to wait for apiserver process to appear ...
	I0725 17:10:41.078510   33162 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:10:41.078542   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:41.088995   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 200:
	ok
	I0725 17:10:41.090707   33162 api_server.go:140] control plane version: v1.24.3
	I0725 17:10:41.090725   33162 api_server.go:130] duration metric: took 12.204434ms to wait for apiserver health ...
	I0725 17:10:41.090732   33162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:10:41.100882   33162 system_pods.go:59] 9 kube-system pods found
	I0725 17:10:41.100912   33162 system_pods.go:61] "coredns-6d4b75cb6d-dmnl4" [75f79fe8-36b7-421f-bb6c-f04ddc553086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:41.100936   33162 system_pods.go:61] "coredns-6d4b75cb6d-nwgth" [9423c7c6-992c-437c-ad7e-28a2ab1eecdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:41.100947   33162 system_pods.go:61] "etcd-newest-cni-20220725170926-14919" [7aca802c-2727-4227-9c2c-c969f0a334cf] Running
	I0725 17:10:41.100956   33162 system_pods.go:61] "kube-apiserver-newest-cni-20220725170926-14919" [aa239dc3-e3c0-4446-957a-24cd198cbb3c] Running
	I0725 17:10:41.100967   33162 system_pods.go:61] "kube-controller-manager-newest-cni-20220725170926-14919" [a5400bd1-f383-426d-b6f6-265553b518ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 17:10:41.100973   33162 system_pods.go:61] "kube-proxy-thgm5" [2bd1bc65-9c26-4b8e-86b9-3e0bd3599e69] Running
	I0725 17:10:41.100990   33162 system_pods.go:61] "kube-scheduler-newest-cni-20220725170926-14919" [9eaafaf3-71e5-4e23-8f04-0b6c5c8e1357] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 17:10:41.100996   33162 system_pods.go:61] "metrics-server-5c6f97fb75-lsp4c" [6751fa1e-1d48-4008-9432-cdac2124118b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:10:41.101006   33162 system_pods.go:61] "storage-provisioner" [50d8c534-72e2-4185-b2d1-5ce19567413e] Running
	I0725 17:10:41.101012   33162 system_pods.go:74] duration metric: took 10.276317ms to wait for pod list to return data ...
	I0725 17:10:41.101018   33162 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:10:41.104454   33162 default_sa.go:45] found service account: "default"
	I0725 17:10:41.104471   33162 default_sa.go:55] duration metric: took 3.4456ms for default service account to be created ...
	I0725 17:10:41.104481   33162 kubeadm.go:572] duration metric: took 495.187773ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0725 17:10:41.104501   33162 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:10:41.109202   33162 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:10:41.109220   33162 node_conditions.go:123] node cpu capacity is 6
	I0725 17:10:41.109230   33162 node_conditions.go:105] duration metric: took 4.725267ms to run NodePressure ...
	I0725 17:10:41.109240   33162 start.go:216] waiting for startup goroutines ...
	I0725 17:10:41.154538   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.155606   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.159137   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.171747   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.277597   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 17:10:41.277615   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 17:10:41.277691   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 17:10:41.277701   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 17:10:41.288691   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:10:41.300595   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 17:10:41.300646   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 17:10:41.305575   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 17:10:41.305588   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 17:10:41.305589   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 17:10:41.320296   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:10:41.320311   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 17:10:41.325064   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 17:10:41.325088   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 17:10:41.343386   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:10:41.352726   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 17:10:41.352746   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 17:10:41.429907   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 17:10:41.429923   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 17:10:41.447523   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 17:10:41.447537   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 17:10:41.516266   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 17:10:41.516285   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 17:10:41.535836   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 17:10:41.535850   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 17:10:41.554058   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:10:41.554073   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 17:10:41.572255   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:10:42.151794   33162 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220725170926-14919"
	I0725 17:10:42.288108   33162 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 17:10:42.345471   33162 addons.go:414] enableAddons completed in 1.736158741s
	I0725 17:10:42.381249   33162 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0725 17:10:42.403296   33162 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220725170926-14919" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-07-26 00:10:25 UTC, end at Tue 2022-07-26 00:11:20 UTC. --
	Jul 26 00:10:28 newest-cni-20220725170926-14919 systemd[1]: Starting Docker Application Container Engine...
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.806373383Z" level=info msg="Starting up"
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.808377382Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.808409623Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.808428326Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.808437250Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.809619578Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.809651448Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.809671004Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.809684335Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.813168334Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.818255370Z" level=info msg="Loading containers: start."
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.914841371Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.948091781Z" level=info msg="Loading containers: done."
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.958250938Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.958343386Z" level=info msg="Daemon has completed initialization"
	Jul 26 00:10:28 newest-cni-20220725170926-14919 systemd[1]: Started Docker Application Container Engine.
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.981294704Z" level=info msg="API listen on [::]:2376"
	Jul 26 00:10:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:28.990425626Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 26 00:10:40 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:40.647220011Z" level=info msg="ignoring event" container=4ca1bdea0fdc69c7173682b5b07c59e7659b7a8bd4d6bd97da3e260529902e7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:10:41 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:41.180559056Z" level=info msg="ignoring event" container=4453b77131a787200fe4628ba95c4651cae4b07ee0e7d1dee55830d11a39f504 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:10:42 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:42.653384484Z" level=info msg="ignoring event" container=d84f255b5da60225a20db24129e9ed5389967f70e6e0882758969aa5f15e755b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:10:42 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:42.726537855Z" level=info msg="ignoring event" container=6521a581957e6bbfe598aff584db6c5364a5414bf399606d5e8cad159168a004 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:10:43 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:43.524620787Z" level=info msg="ignoring event" container=195368b19c556368665cfea91eb39a0328b0d6837c63098f754e07ea3a2ebe95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:10:43 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:43.546319588Z" level=info msg="ignoring event" container=a443999ee44bdf2fdb6b53f1800e217508044feb54bc9bd869b0329bae99b7fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	808bbbe79ed5e       6e38f40d628db       40 seconds ago       Running             storage-provisioner       1                   e6aec7f6992b8
	42627e85bae2f       2ae1ba6417cbc       41 seconds ago       Running             kube-proxy                1                   92e03cae9e64e
	eccbd318b51f9       586c112956dfc       45 seconds ago       Running             kube-controller-manager   1                   4b390efc25066
	608941598c4c9       d521dd763e2e3       45 seconds ago       Running             kube-apiserver            1                   59c6e5e4348f7
	cd7d7c0a4b6e5       aebe758cef4cd       45 seconds ago       Running             etcd                      1                   f1808f8313c96
	4ae07a81558e3       3a5aa3a515f5d       45 seconds ago       Running             kube-scheduler            1                   b1eb14474d4fb
	e3093a0bea734       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   a5c118b426c2d
	eb8d778947323       2ae1ba6417cbc       About a minute ago   Exited              kube-proxy                0                   c00a5e1122630
	54430765218a2       aebe758cef4cd       About a minute ago   Exited              etcd                      0                   22c1ccaaf65af
	264f85de3b55e       586c112956dfc       About a minute ago   Exited              kube-controller-manager   0                   0c966b0d8030f
	1ae34c8051d51       d521dd763e2e3       About a minute ago   Exited              kube-apiserver            0                   3a3b080204591
	7e75f9965e1a6       3a5aa3a515f5d       About a minute ago   Exited              kube-scheduler            0                   caf103a64c255
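	
	Reading the table: every control-plane container appears twice, once Exited at ATTEMPT 0 and once Running at ATTEMPT 1, i.e. the whole cluster was stopped and restarted partway through the test rather than crashing piecemeal. A minimal sketch for reproducing this table by hand (a hedged example, not part of the original run; it assumes the standard minikube ssh and crictl invocations, with the profile name taken from the logs above):
	
	  out/minikube-darwin-amd64 ssh -p newest-cni-20220725170926-14919 -- sudo crictl ps -a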
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220725170926-14919
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220725170926-14919
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b
	                    minikube.k8s.io/name=newest-cni-20220725170926-14919
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T17_09_54_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Jul 2022 00:09:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220725170926-14919
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Jul 2022 00:11:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Jul 2022 00:11:17 +0000   Tue, 26 Jul 2022 00:09:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Jul 2022 00:11:17 +0000   Tue, 26 Jul 2022 00:09:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Jul 2022 00:11:17 +0000   Tue, 26 Jul 2022 00:09:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Jul 2022 00:11:17 +0000   Tue, 26 Jul 2022 00:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-20220725170926-14919
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                0b73bc9f-1df2-4cb3-ad1c-9ce261e8373c
	  Boot ID:                    95c3cee9-5325-46b1-8645-b2afb4791ab2
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-nwgth                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     72s
	  kube-system                 etcd-newest-cni-20220725170926-14919                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         86s
	  kube-system                 kube-apiserver-newest-cni-20220725170926-14919             250m (4%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-newest-cni-20220725170926-14919    200m (3%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-thgm5                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-newest-cni-20220725170926-14919             100m (1%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 metrics-server-5c6f97fb75-lsp4c                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         69s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-f9vrj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-qnd8s                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  Starting                 71s                kube-proxy       
	  Normal  NodeHasSufficientMemory  97s (x4 over 97s)  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x4 over 97s)  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x4 over 97s)  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  86s                kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s                kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s                kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           73s                node-controller  Node newest-cni-20220725170926-14919 event: Registered Node newest-cni-20220725170926-14919 in Controller
	  Normal  NodeAllocatableEnforced  46s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    46s (x5 over 46s)  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x5 over 46s)  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  46s (x5 over 46s)  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                 kubelet          Node newest-cni-20220725170926-14919 status is now: NodeNotReady
	  Normal  NodeReady                3s                 kubelet          Node newest-cni-20220725170926-14919 status is now: NodeReady
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-20220725170926-14919 event: Registered Node newest-cni-20220725170926-14919 in Controller
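	
	The request/limit percentages in the tables above are simply requests divided by the node's Allocatable values, truncated to a whole percent. A quick shell check of the two totals (values taken from the Allocatable block above; 6086504Ki is roughly 5944Mi):
	
	  echo $((850 * 100 / 6000))   # CPU: 850m of 6000m allocatable -> 14
	  echo $((370 * 100 / 5944))   # memory: 370Mi of ~5944Mi allocatable -> 6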
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [54430765218a] <==
	* {"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220725170926-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:09:49.396Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-26T00:09:49.396Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-26T00:09:49.396Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-26T00:09:49.397Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-26T00:10:12.031Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-26T00:10:12.031Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220725170926-14919","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/07/26 00:10:12 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/26 00:10:12 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-26T00:10:12.041Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-07-26T00:10:12.043Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:10:12.043Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:10:12.043Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220725170926-14919","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [cd7d7c0a4b6e] <==
	* {"level":"info","ts":"2022-07-26T00:10:35.378Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-26T00:10:35.378Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-26T00:10:35.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-26T00:10:35.378Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-26T00:10:35.378Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:10:35.427Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:10:35.430Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-26T00:10:35.430Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-26T00:10:35.431Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-26T00:10:35.431Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:10:35.431Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-26T00:10:37.046Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220725170926-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-26T00:10:37.046Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:10:37.046Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:10:37.047Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-26T00:10:37.047Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-26T00:10:37.048Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-26T00:10:37.048Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  00:11:21 up  1:17,  0 users,  load average: 1.25, 1.01, 1.01
	Linux newest-cni-20220725170926-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1ae34c8051d5] <==
	* W0726 00:10:13.036541       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036544       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036565       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036571       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036588       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036589       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036602       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036613       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036614       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036565       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036631       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036643       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036651       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036664       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036668       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036682       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036683       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036687       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036693       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036706       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036752       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036771       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036789       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036915       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036929       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
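	
	Every warning above carries the same 00:10:13 timestamp: these are the apiserver's pooled etcd clients all dropping at once, one second after etcd logged "received signal; shutting down" (see the etcd [54430765218a] section). A hedged sketch for counting the burst instead of reading it (short container ID from the section header):
	
	  out/minikube-darwin-amd64 ssh -p newest-cni-20220725170926-14919 -- "docker logs 1ae34c8051d5 2>&1 | grep -c 'connection refused'"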
	
	* 
	* ==> kube-apiserver [608941598c4c] <==
	* I0726 00:10:38.858848       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0726 00:10:38.858852       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0726 00:10:38.906530       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0726 00:10:38.920582       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0726 00:10:39.545723       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0726 00:10:39.763394       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0726 00:10:39.926282       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:10:39.926304       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0726 00:10:39.926310       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0726 00:10:39.926347       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:10:39.926379       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0726 00:10:39.927513       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0726 00:10:40.112677       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0726 00:10:40.451373       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0726 00:10:40.466448       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0726 00:10:40.527879       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0726 00:10:40.541711       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0726 00:10:40.547449       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0726 00:10:42.049725       1 controller.go:611] quota admission added evaluator for: namespaces
	I0726 00:10:42.247266       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.110.148.216]
	I0726 00:10:42.258091       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.107.175.56]
	I0726 00:11:17.320510       1 controller.go:611] quota admission added evaluator for: endpoints
	I0726 00:11:18.399010       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0726 00:11:18.548520       1 controller.go:611] quota admission added evaluator for: replicasets.apps
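	
	The 503 for v1beta1.metrics.k8s.io above only means the aggregated metrics API had no ready backend yet (metrics-server was still starting when the apiserver came back), so the OpenAPI controller rate-limit-requeues it rather than failing. A hedged check from the host, assuming minikube registered the kubectl context under the profile name as it does by default:
	
	  kubectl --context newest-cni-20220725170926-14919 get apiservice v1beta1.metrics.k8s.io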
	
	* 
	* ==> kube-controller-manager [264f85de3b55] <==
	* I0726 00:10:07.914055       1 shared_informer.go:262] Caches are synced for job
	I0726 00:10:07.914086       1 shared_informer.go:262] Caches are synced for attach detach
	I0726 00:10:07.914305       1 shared_informer.go:262] Caches are synced for PVC protection
	I0726 00:10:07.915641       1 shared_informer.go:262] Caches are synced for persistent volume
	I0726 00:10:07.972311       1 shared_informer.go:262] Caches are synced for resource quota
	I0726 00:10:07.990545       1 shared_informer.go:262] Caches are synced for taint
	I0726 00:10:07.990701       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0726 00:10:07.990869       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220725170926-14919. Assuming now as a timestamp.
	I0726 00:10:07.990991       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0726 00:10:07.990989       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0726 00:10:07.991269       1 event.go:294] "Event occurred" object="newest-cni-20220725170926-14919" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220725170926-14919 event: Registered Node newest-cni-20220725170926-14919 in Controller"
	I0726 00:10:08.015359       1 shared_informer.go:262] Caches are synced for endpoint
	I0726 00:10:08.015435       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0726 00:10:08.018502       1 shared_informer.go:262] Caches are synced for resource quota
	I0726 00:10:08.434369       1 shared_informer.go:262] Caches are synced for garbage collector
	I0726 00:10:08.465396       1 shared_informer.go:262] Caches are synced for garbage collector
	I0726 00:10:08.465439       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0726 00:10:08.619948       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-thgm5"
	I0726 00:10:08.668099       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0726 00:10:08.818220       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dmnl4"
	I0726 00:10:08.823546       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-nwgth"
	I0726 00:10:08.865894       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0726 00:10:08.931768       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-dmnl4"
	I0726 00:10:11.235602       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0726 00:10:11.241626       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-lsp4c"
	
	* 
	* ==> kube-controller-manager [eccbd318b51f] <==
	* I0726 00:11:18.221650       1 shared_informer.go:262] Caches are synced for attach detach
	I0726 00:11:18.222661       1 shared_informer.go:262] Caches are synced for GC
	I0726 00:11:18.225486       1 shared_informer.go:262] Caches are synced for job
	I0726 00:11:18.228807       1 shared_informer.go:262] Caches are synced for taint
	I0726 00:11:18.228890       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0726 00:11:18.228962       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220725170926-14919. Assuming now as a timestamp.
	I0726 00:11:18.228989       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0726 00:11:18.229017       1 event.go:294] "Event occurred" object="newest-cni-20220725170926-14919" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220725170926-14919 event: Registered Node newest-cni-20220725170926-14919 in Controller"
	I0726 00:11:18.229038       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0726 00:11:18.230650       1 shared_informer.go:262] Caches are synced for ephemeral
	I0726 00:11:18.287050       1 shared_informer.go:262] Caches are synced for PVC protection
	I0726 00:11:18.295799       1 shared_informer.go:262] Caches are synced for deployment
	I0726 00:11:18.296612       1 shared_informer.go:262] Caches are synced for persistent volume
	I0726 00:11:18.300007       1 shared_informer.go:262] Caches are synced for daemon sets
	I0726 00:11:18.308597       1 shared_informer.go:262] Caches are synced for endpoint
	I0726 00:11:18.311172       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0726 00:11:18.396024       1 shared_informer.go:262] Caches are synced for resource quota
	I0726 00:11:18.399423       1 shared_informer.go:262] Caches are synced for resource quota
	I0726 00:11:18.551728       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0726 00:11:18.553921       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0726 00:11:18.702362       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-qnd8s"
	I0726 00:11:18.705208       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-f9vrj"
	I0726 00:11:18.820024       1 shared_informer.go:262] Caches are synced for garbage collector
	I0726 00:11:18.899968       1 shared_informer.go:262] Caches are synced for garbage collector
	I0726 00:11:18.900002       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [42627e85bae2] <==
	* I0726 00:10:40.093380       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0726 00:10:40.093436       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0726 00:10:40.093456       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0726 00:10:40.109947       1 server_others.go:206] "Using iptables Proxier"
	I0726 00:10:40.110029       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0726 00:10:40.110038       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0726 00:10:40.110047       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0726 00:10:40.110069       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:10:40.110231       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:10:40.110362       1 server.go:661] "Version info" version="v1.24.3"
	I0726 00:10:40.110388       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0726 00:10:40.110880       1 config.go:317] "Starting service config controller"
	I0726 00:10:40.110935       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0726 00:10:40.111192       1 config.go:444] "Starting node config controller"
	I0726 00:10:40.111197       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0726 00:10:40.111214       1 config.go:226] "Starting endpoint slice config controller"
	I0726 00:10:40.111217       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0726 00:10:40.211641       1 shared_informer.go:262] Caches are synced for service config
	I0726 00:10:40.211674       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0726 00:10:40.211731       1 shared_informer.go:262] Caches are synced for node config
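	
	Both kube-proxy instances log proxyMode="" ("Unknown proxy mode, assuming iptables proxy"), so the iptables proxier here is a fallback default, not an explicit choice. A hedged way to see what kubeadm actually configured (the kube-proxy ConfigMap name is the kubeadm default; an empty mode: field confirms the fallback):
	
	  kubectl --context newest-cni-20220725170926-14919 -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'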
	
	* 
	* ==> kube-proxy [eb8d77894732] <==
	* I0726 00:10:09.139940       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0726 00:10:09.139992       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0726 00:10:09.140012       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0726 00:10:09.167842       1 server_others.go:206] "Using iptables Proxier"
	I0726 00:10:09.167886       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0726 00:10:09.167893       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0726 00:10:09.167903       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0726 00:10:09.168144       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:10:09.168480       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:10:09.169215       1 server.go:661] "Version info" version="v1.24.3"
	I0726 00:10:09.169298       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0726 00:10:09.169749       1 config.go:317] "Starting service config controller"
	I0726 00:10:09.169802       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0726 00:10:09.170316       1 config.go:444] "Starting node config controller"
	I0726 00:10:09.170401       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0726 00:10:09.170436       1 config.go:226] "Starting endpoint slice config controller"
	I0726 00:10:09.170564       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0726 00:10:09.270361       1 shared_informer.go:262] Caches are synced for service config
	I0726 00:10:09.271609       1 shared_informer.go:262] Caches are synced for node config
	I0726 00:10:09.274910       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [4ae07a81558e] <==
	* W0726 00:10:35.437985       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0726 00:10:35.949769       1 serving.go:348] Generated self-signed cert in-memory
	W0726 00:10:38.811321       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0726 00:10:38.811426       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0726 00:10:38.811434       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0726 00:10:38.811440       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0726 00:10:38.827253       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0726 00:10:38.827329       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0726 00:10:38.829104       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0726 00:10:38.829609       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0726 00:10:38.829628       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0726 00:10:38.831475       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0726 00:10:38.932990       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [7e75f9965e1a] <==
	* E0726 00:09:51.782153       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0726 00:09:51.782230       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0726 00:09:51.782294       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0726 00:09:51.782817       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:09:51.782898       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0726 00:09:51.782933       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0726 00:09:51.783020       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0726 00:09:52.599735       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0726 00:09:52.599847       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0726 00:09:52.614158       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0726 00:09:52.614187       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0726 00:09:52.728213       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:09:52.729738       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0726 00:09:52.729692       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0726 00:09:52.729931       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0726 00:09:52.739693       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0726 00:09:52.739727       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0726 00:09:52.781650       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0726 00:09:52.781739       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0726 00:09:52.901090       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0726 00:09:52.901262       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0726 00:09:55.378413       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0726 00:10:12.026583       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0726 00:10:12.026606       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0726 00:10:12.026971       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-07-26 00:10:25 UTC, end at Tue 2022-07-26 00:11:23 UTC. --
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         ]
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:  > pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-f9vrj"
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]: E0726 00:11:21.897897    3620 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard(583917cc-373c-4d5a-8d68-6972ef0a0625)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard(583917cc-373c-4d5a-8d68-6972ef0a0625)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"e392cf371d3e3cf21ac32f2eced22a78948eb017bfc4b6256bdeff2a801356d9\\\" network for pod \\\"dashboard-metrics-scraper-dffd48c4c-f9vrj\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"e392cf371d3e3cf21ac32f2eced22a78948eb017bfc4b6256bdeff2a801356d9\\\" network for pod \\\"dashboard-metrics-scraper-dffd48c4c-f9vrj\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-a2131479827025bc50211745 -m comment --comment name: \\\"crio\\\" id: \\\"e392cf371d3e3cf21ac32f2eced22a78948eb017bfc4b6256bdeff2a801356d9\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-a2131479827025bc50211745':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-f9vrj" podUID=583917cc-373c-4d5a-8d68-6972ef0a0625
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]: E0726 00:11:21.901500    3620 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err=<
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         rpc error: code = Unknown desc = [failed to set up sandbox container "b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3" network for pod "kubernetes-dashboard-5fd5574d9f-qnd8s": networkPlugin cni failed to set up pod "kubernetes-dashboard-5fd5574d9f-qnd8s_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3" network for pod "kubernetes-dashboard-5fd5574d9f-qnd8s": networkPlugin cni failed to teardown pod "kubernetes-dashboard-5fd5574d9f-qnd8s_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-a8f6730852c3fe03888017bf -m comment --comment name: "crio" id: "b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-a8f6730852c3fe03888017bf':No such file or directory
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         ]
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:  >
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]: E0726 00:11:21.901585    3620 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         rpc error: code = Unknown desc = [failed to set up sandbox container "b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3" network for pod "kubernetes-dashboard-5fd5574d9f-qnd8s": networkPlugin cni failed to set up pod "kubernetes-dashboard-5fd5574d9f-qnd8s_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3" network for pod "kubernetes-dashboard-5fd5574d9f-qnd8s": networkPlugin cni failed to teardown pod "kubernetes-dashboard-5fd5574d9f-qnd8s_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-a8f6730852c3fe03888017bf -m comment --comment name: "crio" id: "b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-a8f6730852c3fe03888017bf':No such file or directory
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         ]
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:  > pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-qnd8s"
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]: E0726 00:11:21.901606    3620 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         rpc error: code = Unknown desc = [failed to set up sandbox container "b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3" network for pod "kubernetes-dashboard-5fd5574d9f-qnd8s": networkPlugin cni failed to set up pod "kubernetes-dashboard-5fd5574d9f-qnd8s_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3" network for pod "kubernetes-dashboard-5fd5574d9f-qnd8s": networkPlugin cni failed to teardown pod "kubernetes-dashboard-5fd5574d9f-qnd8s_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-a8f6730852c3fe03888017bf -m comment --comment name: "crio" id: "b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-a8f6730852c3fe03888017bf':No such file or directory
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:         ]
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]:  > pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-qnd8s"
	Jul 26 00:11:21 newest-cni-20220725170926-14919 kubelet[3620]: E0726 00:11:21.901686    3620 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-5fd5574d9f-qnd8s_kubernetes-dashboard(98406fe6-1643-44b0-9305-f869d5dba210)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-5fd5574d9f-qnd8s_kubernetes-dashboard(98406fe6-1643-44b0-9305-f869d5dba210)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3\\\" network for pod \\\"kubernetes-dashboard-5fd5574d9f-qnd8s\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-5fd5574d9f-qnd8s_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3\\\" network for pod \\\"kubernetes-dashboard-5fd5574d9f-qnd8s\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-5fd5574d9f-qnd8s_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-a8f6730852c3fe03888017bf -m comment --comment name: \\\"crio\\\" id: \\\"b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-a8f6730852c3fe03888017bf':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-qnd8s" podUID=98406fe6-1643-44b0-9305-f869d5dba210
	Jul 26 00:11:22 newest-cni-20220725170926-14919 kubelet[3620]: E0726 00:11:22.054018    3620 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-newest-cni-20220725170926-14919\" already exists" pod="kube-system/kube-controller-manager-newest-cni-20220725170926-14919"
	Jul 26 00:11:22 newest-cni-20220725170926-14919 kubelet[3620]: I0726 00:11:22.613081    3620 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3"
	Jul 26 00:11:22 newest-cni-20220725170926-14919 kubelet[3620]: I0726 00:11:22.618518    3620 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="6e0a5c0d14245dc4e5def32fbd6bebc2b28acfbcc35cb087e3bf4c1b54832b9f"
	
	* 
	* ==> storage-provisioner [808bbbe79ed5] <==
	* I0726 00:10:41.221365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0726 00:10:41.232689       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0726 00:10:41.232781       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0726 00:11:17.324569       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0726 00:11:17.324713       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725170926-14919_101de8b1-4e96-49cb-bbf2-b75bfdf53fd7!
	I0726 00:11:17.324744       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6776a20a-b9cc-4a7f-abca-7da433162f63", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220725170926-14919_101de8b1-4e96-49cb-bbf2-b75bfdf53fd7 became leader
	I0726 00:11:17.425151       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725170926-14919_101de8b1-4e96-49cb-bbf2-b75bfdf53fd7!
	
	* 
	* ==> storage-provisioner [e3093a0bea73] <==
	* I0726 00:10:10.996092       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0726 00:10:11.007658       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0726 00:10:11.007708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0726 00:10:11.028484       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0726 00:10:11.028674       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725170926-14919_69baca53-05f8-4024-b3bd-3d34ac15026e!
	I0726 00:10:11.028675       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6776a20a-b9cc-4a7f-abca-7da433162f63", APIVersion:"v1", ResourceVersion:"378", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220725170926-14919_69baca53-05f8-4024-b3bd-3d34ac15026e became leader
	I0726 00:10:11.129208       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725170926-14919_69baca53-05f8-4024-b3bd-3d34ac15026e!
	

                                                
                                                
-- /stdout --
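Note on the kubelet errors above: sandbox creation fails because the CNI bridge plugin cannot assign an address to cni0 ("permission denied"), and the follow-up teardown fails because the per-sandbox NAT chain (CNI-a2131479827025bc50211745, CNI-a8f6730852c3fe03888017bf) is already gone, so iptables exits with status 2 and the dashboard pods never leave Pending. A hedged way to inspect that state by hand, using the minikube binary (out/minikube-darwin-amd64 in this run) and the chain name copied from the error:

	minikube -p newest-cni-20220725170926-14919 ssh
	# inside the node: list POSTROUTING rules that still reference CNI-* chains
	sudo iptables -t nat -S POSTROUTING | grep CNI-
	# confirm whether the chain named in the error still exists
	sudo iptables -t nat -L CNI-a8f6730852c3fe03888017bf -n 2>/dev/null || echo "chain already removed"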
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220725170926-14919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context newest-cni-20220725170926-14919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.158431903s)
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-nwgth metrics-server-5c6f97fb75-lsp4c dashboard-metrics-scraper-dffd48c4c-f9vrj kubernetes-dashboard-5fd5574d9f-qnd8s
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220725170926-14919 describe pod coredns-6d4b75cb6d-nwgth metrics-server-5c6f97fb75-lsp4c dashboard-metrics-scraper-dffd48c4c-f9vrj kubernetes-dashboard-5fd5574d9f-qnd8s
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220725170926-14919 describe pod coredns-6d4b75cb6d-nwgth metrics-server-5c6f97fb75-lsp4c dashboard-metrics-scraper-dffd48c4c-f9vrj kubernetes-dashboard-5fd5574d9f-qnd8s: exit status 1 (282.979854ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-nwgth" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-lsp4c" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-f9vrj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-qnd8s" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220725170926-14919 describe pod coredns-6d4b75cb6d-nwgth metrics-server-5c6f97fb75-lsp4c dashboard-metrics-scraper-dffd48c4c-f9vrj kubernetes-dashboard-5fd5574d9f-qnd8s: exit status 1
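The four NotFound errors are most likely a race rather than a second failure: the pod names were captured by the field-selector query at helpers_test.go:261, and by the time describe ran those pods had been deleted or replaced by their controllers, so each lookup fails individually. A sketch that re-resolves the non-running set at describe time instead of reusing stale names (context name taken from this report; namespace is resolved per pod because describe is namespaced):

	kubectl --context newest-cni-20220725170926-14919 get pods -A \
	  --field-selector=status.phase!=Running --no-headers \
	  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name |
	while read ns name; do
	  kubectl --context newest-cni-20220725170926-14919 -n "$ns" describe pod "$name"
	done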
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220725170926-14919
helpers_test.go:235: (dbg) docker inspect newest-cni-20220725170926-14919:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21",
	        "Created": "2022-07-26T00:09:33.395820497Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 311558,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-26T00:10:25.67552367Z",
	            "FinishedAt": "2022-07-26T00:10:23.755408995Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21/hostname",
	        "HostsPath": "/var/lib/docker/containers/70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21/hosts",
	        "LogPath": "/var/lib/docker/containers/70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21/70670a22d2f66f38956f55a98b8c3648f2f788d0320bc060b498ba447dbdbb21-json.log",
	        "Name": "/newest-cni-20220725170926-14919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220725170926-14919:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220725170926-14919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9988b34e684b73313054847e6562f040595dcfd62c5c949651e81ffcf9758814-init/diff:/var/lib/docker/overlay2/8c24b3eef47c80f0f1c7ecd431fc4ced5f467ae6db9b9e15507366a887a16ed3/diff:/var/lib/docker/overlay2/1b13d21ea451468afe209b1a9bc9df23c784fe766b47a4d2c6b05771b3799217/diff:/var/lib/docker/overlay2/4707d11e07cb14467c80db9fd5e705fd971fe8dff1b1a50631c7c397c2ded00e/diff:/var/lib/docker/overlay2/55106e26e284037bfbb01e36e74e1dc2843604ee0df9e1f3b9d7404173bce2c7/diff:/var/lib/docker/overlay2/b74a4243ccfd0f85c23f3f63b818e16338778001142242810ba6dcd43a8acbd3/diff:/var/lib/docker/overlay2/40567925ce3f8310adb4e84ed27150dcfe6d7a4db7502b89c6135e82fb0d5370/diff:/var/lib/docker/overlay2/be304c5407af1d541d260988e5bb5ebcf56e95809db52c6cae56b59bf40a882a/diff:/var/lib/docker/overlay2/ee25820f5a961c0275b70d9543c62671de190985358a6c691479a4635c953cae/diff:/var/lib/docker/overlay2/cceefc5ac9abbaf9eae8333f659ffe45487e761d55acd59184b60db5e188e624/diff:/var/lib/docker/overlay2/476c70
0ef9d2925715c49427a6eba65a007cf487f40bd93d7b1abfc3da1b61bb/diff:/var/lib/docker/overlay2/d2ab89e790951a9a32019722b341819767da138caefe3df8f62b55b9e8e5010f/diff:/var/lib/docker/overlay2/d8859699ea67a49a1820ca35ba701a745c6705d05f31887dad6eb0230848c57b/diff:/var/lib/docker/overlay2/fcc2d4afadec8f48bffbd14e51b5d12833885b04baadc27b22a9df2fad3499da/diff:/var/lib/docker/overlay2/55fc6531ed6da13485b66937ebcdca76e490ab1f3646b091d8dede2fcdd3a346/diff:/var/lib/docker/overlay2/2d9b9235b115f09d9808bc0b097875a3bb5deba25a946f4317426bce8ba44f30/diff:/var/lib/docker/overlay2/0ddb50127acbbe1c0cd98d2127d38e8f16d399dd88822ec2947750d9a4c07838/diff:/var/lib/docker/overlay2/b1a5a3e9f71556a8e482b985fb477ce882b1d012bf7be9cb5145427cc778a11b/diff:/var/lib/docker/overlay2/3b4d0a1addb375e5599767278ab9fbab6aca53fa23b439beee3a6595a886aa7f/diff:/var/lib/docker/overlay2/6929688577f548f8ddfd5f33c02a81568e93fb3423bbac449561d73b976ee5eb/diff:/var/lib/docker/overlay2/d88d09034e9f9d85ca61b7dcab26b16e4989acaf53af7f5f5f85820a777b0702/diff:/var/lib/d
ocker/overlay2/bbd98fa65a1a543dafee7584755a441fe27533744e7483d4cd3ac2f5edc2589f/diff:/var/lib/docker/overlay2/643ff621d673553cfd9bf1f011c4d135cccb15ddfb0591d701ce396aea54fb79/diff:/var/lib/docker/overlay2/e0969fb7c878c5000fecdc7ba86eab53b8e95ccc25374fda67368db468007e17/diff:/var/lib/docker/overlay2/3052ace23d9ce56505c24df0928b62e74927fc0b2212ece22a1253218759b803/diff:/var/lib/docker/overlay2/03ec01fe8cbf7a6c5232ceb75a3768fd37b829401c006a9a1451d350e71a27b3/diff:/var/lib/docker/overlay2/712f64ccf9f2f3e7d7cb87d06c6cc2e8567099d842b20fbb94d9b1e79694342d/diff:/var/lib/docker/overlay2/ab2b3752b20818866edacb9bf7d0d0965815cb0742628f75d91d85a020c2f1b8/diff:/var/lib/docker/overlay2/21494fe93eee8bbfe09ecd6c6a596cf45c3947085c99f221207936547ea67ca9/diff:/var/lib/docker/overlay2/97063796233cccc3f6decef047bf93573531430d26fad1ac01667a8bbf03aa16/diff:/var/lib/docker/overlay2/78c3f52b1cb607edf4686b5f18658408e1620d2126b67d29b381d2f79ddcd3a5/diff:/var/lib/docker/overlay2/31d59cc979a6585e67e93045d936dda4da395aff1d7ca127697357a0a70
0e9de/diff:/var/lib/docker/overlay2/265847d373e6e0b3e8ec58d1fe1b4233df0c6d82714e5feb90eaf9ae8fd3b4b9/diff:/var/lib/docker/overlay2/e70d9e2b9feff2fb0c862a7365a93b6b7df8f0a57d2968ef41477d99eb3ae917/diff:/var/lib/docker/overlay2/c4f0119620e195fc293916149bc10315ba43bb897dae4e737fb68e2c302eda0a/diff:/var/lib/docker/overlay2/d3d041b45f435899d1cc2475644014c810bb692497d6c85a78f162ca17a9a96e/diff:/var/lib/docker/overlay2/e6c8eac01cbf493968305650e82f20892777ab3681b2783e64005b1fa34495ff/diff:/var/lib/docker/overlay2/bb5531f8ddef5b5f63c98cabf77cd21ae94859aecde256b35ecb339914c657de/diff:/var/lib/docker/overlay2/a747c36582c99af09553f307a3b9483c4ef35006fd456f525fd4ccba6280de59/diff:/var/lib/docker/overlay2/9a1c04cf5350a9de6d7e75995e6f55e40a0403b24cd2251640e43f35ad66294d/diff:/var/lib/docker/overlay2/4f06033da9f3778ae16ce3631a0f071407e6eb2b60b33ff3e383b9999fcfad02/diff:/var/lib/docker/overlay2/a06eabc7f3f9dd8aa35e2fabe565c5e209535101f980c9709a2fb605b96cd586/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9988b34e684b73313054847e6562f040595dcfd62c5c949651e81ffcf9758814/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9988b34e684b73313054847e6562f040595dcfd62c5c949651e81ffcf9758814/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9988b34e684b73313054847e6562f040595dcfd62c5c949651e81ffcf9758814/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220725170926-14919",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220725170926-14919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220725170926-14919",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220725170926-14919",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220725170926-14919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "412c4e11789871b4076bac2436085935ccc7d29b3d5994ca0d062757f0371991",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52976"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52977"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52979"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52980"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/412c4e117898",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220725170926-14919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "70670a22d2f6",
	                        "newest-cni-20220725170926-14919"
	                    ],
	                    "NetworkID": "6ab3bfd3244da82ffafcd9e785631e57ff855e44a8e471d53739ac11b8e548ef",
	                    "EndpointID": "d08e6ac741221eb652f059400fbb2d5dff6b744e1ae6f2b1f4f34bde073b4428",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
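The inspect dump above is the whole container document; the harness (and minikube itself, in the --format invocations later in this log) pulls single fields with Go templates instead. Equivalent one-liners against the container from this report, with expected values taken from the dump above:

	docker inspect -f '{{.State.Status}}' newest-cni-20220725170926-14919
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-20220725170926-14919    # 52980 above
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-20220725170926-14919            # 192.168.76.2 above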
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220725170926-14919 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220725170926-14919 logs -n 25: (5.823552581s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 16:56 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:01 PDT | 25 Jul 22 17:01 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725165448-14919                | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | embed-certs-20220725165448-14919                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220725170207-14919      | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | disable-driver-mounts-20220725170207-14919                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:02 PDT | 25 Jul 22 17:02 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:03 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:03 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:08 PDT | 25 Jul 22 17:08 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725170207-14919 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:09 PDT |
	|         | default-k8s-different-port-20220725170207-14919            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725170926-14919 --memory=2200           | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:09 PDT | 25 Jul 22 17:10 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725170926-14919 --memory=2200           | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:10 PDT | 25 Jul 22 17:10 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220725170926-14919                 | jenkins | v1.26.0 | 25 Jul 22 17:11 PDT | 25 Jul 22 17:11 PDT |
	|         | newest-cni-20220725170926-14919                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 17:10:24
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:10:24.417864   33162 out.go:296] Setting OutFile to fd 1 ...
	I0725 17:10:24.418039   33162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:10:24.418045   33162 out.go:309] Setting ErrFile to fd 2...
	I0725 17:10:24.418049   33162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 17:10:24.418146   33162 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 17:10:24.418606   33162 out.go:303] Setting JSON to false
	I0725 17:10:24.433673   33162 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":11147,"bootTime":1658783077,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 17:10:24.433808   33162 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 17:10:24.455637   33162 out.go:177] * [newest-cni-20220725170926-14919] minikube v1.26.0 on Darwin 12.5
	I0725 17:10:24.497929   33162 notify.go:193] Checking for updates...
	I0725 17:10:24.519568   33162 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 17:10:24.540666   33162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:10:24.561854   33162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 17:10:24.583713   33162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:10:24.604874   33162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 17:10:24.627553   33162 config.go:178] Loaded profile config "newest-cni-20220725170926-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:10:24.628229   33162 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 17:10:24.697874   33162 docker.go:137] docker version: linux-20.10.17
	I0725 17:10:24.698008   33162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:10:24.830389   33162 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:10:24.768579678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 17:10:24.873988   33162 out.go:177] * Using the docker driver based on existing profile
	I0725 17:10:24.895168   33162 start.go:284] selected driver: docker
	I0725 17:10:24.895244   33162 start.go:808] validating driver "docker" against &{Name:newest-cni-20220725170926-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:10:24.895483   33162 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:10:24.900000   33162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 17:10:25.035516   33162 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-26 00:10:24.970402659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 17:10:25.035678   33162 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0725 17:10:25.035704   33162 cni.go:95] Creating CNI manager for ""
	I0725 17:10:25.035716   33162 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:10:25.035730   33162 start_flags.go:310] config:
	{Name:newest-cni-20220725170926-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:10:25.056836   33162 out.go:177] * Starting control plane node newest-cni-20220725170926-14919 in cluster newest-cni-20220725170926-14919
	I0725 17:10:25.077918   33162 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 17:10:25.098824   33162 out.go:177] * Pulling base image ...
	I0725 17:10:25.141055   33162 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:10:25.141089   33162 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 17:10:25.141143   33162 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0725 17:10:25.141171   33162 cache.go:57] Caching tarball of preloaded images
	I0725 17:10:25.141419   33162 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 17:10:25.142092   33162 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
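
The preload check above amounts to a stat of the expected tarball path: if the file exists it is reused, otherwise it would be downloaded. A rough standalone equivalent, assuming MINIKUBE_HOME points at the directory containing .minikube:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumption: MINIKUBE_HOME holds the .minikube tree; the tarball name
	// matches the one logged above.
	home := os.Getenv("MINIKUBE_HOME")
	p := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no cached preload, would download:", p)
	}
}
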
	I0725 17:10:25.142479   33162 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/config.json ...
	I0725 17:10:25.206257   33162 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 17:10:25.206278   33162 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 17:10:25.206290   33162 cache.go:208] Successfully downloaded all kic artifacts
	I0725 17:10:25.206376   33162 start.go:370] acquiring machines lock for newest-cni-20220725170926-14919: {Name:mk0f9a30538ef211b73bc7dbc2b91673075b0931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:10:25.206461   33162 start.go:374] acquired machines lock for "newest-cni-20220725170926-14919" in 65.585µs
	I0725 17:10:25.206494   33162 start.go:95] Skipping create...Using existing machine configuration
	I0725 17:10:25.206504   33162 fix.go:55] fixHost starting: 
	I0725 17:10:25.206735   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:25.274150   33162 fix.go:103] recreateIfNeeded on newest-cni-20220725170926-14919: state=Stopped err=<nil>
	W0725 17:10:25.274212   33162 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 17:10:25.296502   33162 out.go:177] * Restarting existing docker container for "newest-cni-20220725170926-14919" ...
	I0725 17:10:25.322901   33162 cli_runner.go:164] Run: docker start newest-cni-20220725170926-14919
	I0725 17:10:25.670582   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:25.747051   33162 kic.go:415] container "newest-cni-20220725170926-14919" state is running.
	I0725 17:10:25.747947   33162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725170926-14919
	I0725 17:10:25.835124   33162 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/config.json ...
	I0725 17:10:25.835685   33162 machine.go:88] provisioning docker machine ...
	I0725 17:10:25.835720   33162 ubuntu.go:169] provisioning hostname "newest-cni-20220725170926-14919"
	I0725 17:10:25.835849   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:25.920990   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:25.921209   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:25.921222   33162 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220725170926-14919 && echo "newest-cni-20220725170926-14919" | sudo tee /etc/hostname
	I0725 17:10:26.056106   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220725170926-14919
	
	I0725 17:10:26.056189   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:26.132180   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:26.132352   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:26.132376   33162 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220725170926-14919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220725170926-14919/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220725170926-14919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:10:26.253967   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: 
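
The hostname provisioning above sends two command strings over the native SSH client: one to set the hostname, one to patch /etc/hosts. A hypothetical re-creation of the first string, with the profile name spliced in:

package main

import "fmt"

func main() {
	// The hostname is the only variable part of the command logged above.
	name := "newest-cni-20220725170926-14919"
	cmd := fmt.Sprintf(
		"sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fmt.Println(cmd)
}
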
	I0725 17:10:26.253992   33162 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube}
	I0725 17:10:26.254014   33162 ubuntu.go:177] setting up certificates
	I0725 17:10:26.254022   33162 provision.go:83] configureAuth start
	I0725 17:10:26.254089   33162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725170926-14919
	I0725 17:10:26.331695   33162 provision.go:138] copyHostCerts
	I0725 17:10:26.331779   33162 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem, removing ...
	I0725 17:10:26.331794   33162 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem
	I0725 17:10:26.331920   33162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.pem (1082 bytes)
	I0725 17:10:26.332199   33162 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem, removing ...
	I0725 17:10:26.332208   33162 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem
	I0725 17:10:26.332337   33162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cert.pem (1123 bytes)
	I0725 17:10:26.332509   33162 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem, removing ...
	I0725 17:10:26.332515   33162 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem
	I0725 17:10:26.332575   33162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/key.pem (1675 bytes)
	I0725 17:10:26.332689   33162 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220725170926-14919 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220725170926-14919]
	I0725 17:10:26.717276   33162 provision.go:172] copyRemoteCerts
	I0725 17:10:26.717338   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:10:26.717382   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:26.790688   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:26.880826   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0725 17:10:26.897391   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 17:10:26.915109   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 17:10:26.931087   33162 provision.go:86] duration metric: configureAuth took 677.048653ms
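
configureAuth generates a server certificate with the SAN list logged above and copies the PEM files to /etc/docker on the machine. A sketch for inspecting the SANs of such a server.pem with the Go standard library:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	b, err := os.ReadFile(os.Args[1]) // e.g. .minikube/machines/server.pem
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(b)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}
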
	I0725 17:10:26.931102   33162 ubuntu.go:193] setting minikube options for container-runtime
	I0725 17:10:26.931259   33162 config.go:178] Loaded profile config "newest-cni-20220725170926-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:10:26.931314   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.005264   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:27.005412   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:27.005427   33162 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 17:10:27.129482   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 17:10:27.129493   33162 ubuntu.go:71] root file system type: overlay
	I0725 17:10:27.129635   33162 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 17:10:27.129721   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.201716   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:27.201890   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:27.201948   33162 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 17:10:27.330950   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 17:10:27.331083   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.403684   33162 main.go:134] libmachine: Using SSH client type: native
	I0725 17:10:27.403852   33162 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52976 <nil> <nil>}
	I0725 17:10:27.403866   33162 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 17:10:27.528530   33162 main.go:134] libmachine: SSH cmd err, output: <nil>: 
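
The empty output above suggests diff found no differences, so the mv / daemon-reload / restart branch was skipped and the running unit was left alone. A minimal local analogue of that replace-only-if-changed step, using plain files instead of an SSH session:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged mirrors the diff-then-mv step above: keep the old file
// when the new rendering is identical, otherwise move the .new file into place.
func replaceIfChanged(path, newPath string) (bool, error) {
	oldData, _ := os.ReadFile(path) // a missing old file just means "changed"
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	if bytes.Equal(oldData, newData) {
		return false, os.Remove(newPath)
	}
	return true, os.Rename(newPath, path)
}

func main() {
	changed, err := replaceIfChanged("docker.service", "docker.service.new")
	if err != nil {
		panic(err)
	}
	// A real provisioner would daemon-reload and restart only when changed.
	fmt.Println("unit changed:", changed)
}
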
	I0725 17:10:27.528549   33162 machine.go:91] provisioned docker machine in 1.692843192s
	I0725 17:10:27.528563   33162 start.go:307] post-start starting for "newest-cni-20220725170926-14919" (driver="docker")
	I0725 17:10:27.528570   33162 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:10:27.528633   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:10:27.528689   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.600159   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:27.688418   33162 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:10:27.691836   33162 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 17:10:27.691852   33162 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 17:10:27.691859   33162 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 17:10:27.691864   33162 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 17:10:27.691873   33162 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/addons for local assets ...
	I0725 17:10:27.691979   33162 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files for local assets ...
	I0725 17:10:27.692128   33162 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem -> 149192.pem in /etc/ssl/certs
	I0725 17:10:27.692274   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:10:27.699346   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /etc/ssl/certs/149192.pem (1708 bytes)
	I0725 17:10:27.715708   33162 start.go:310] post-start completed in 187.135858ms
	I0725 17:10:27.715797   33162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:10:27.715855   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.789256   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:27.875730   33162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 17:10:27.880586   33162 fix.go:57] fixHost completed within 2.674056608s
	I0725 17:10:27.880604   33162 start.go:82] releasing machines lock for "newest-cni-20220725170926-14919", held for 2.674115777s
	I0725 17:10:27.880683   33162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725170926-14919
	I0725 17:10:27.952738   33162 ssh_runner.go:195] Run: systemctl --version
	I0725 17:10:27.952766   33162 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 17:10:27.952822   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:27.952837   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:28.035925   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:28.037689   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:28.122072   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 17:10:28.343268   33162 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0725 17:10:28.355683   33162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:10:28.420676   33162 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0725 17:10:28.498590   33162 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 17:10:28.508832   33162 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 17:10:28.508892   33162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 17:10:28.518005   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:10:28.530341   33162 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 17:10:28.596050   33162 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 17:10:28.659049   33162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:10:28.725708   33162 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 17:10:28.962213   33162 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 17:10:29.032359   33162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:10:29.104371   33162 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 17:10:29.114153   33162 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 17:10:29.114219   33162 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 17:10:29.117753   33162 start.go:471] Will wait 60s for crictl version
	I0725 17:10:29.117794   33162 ssh_runner.go:195] Run: sudo crictl version
	I0725 17:10:29.147467   33162 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 17:10:29.147535   33162 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 17:10:29.184126   33162 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 17:10:29.262105   33162 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0725 17:10:29.262296   33162 cli_runner.go:164] Run: docker exec -t newest-cni-20220725170926-14919 dig +short host.docker.internal
	I0725 17:10:29.395521   33162 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 17:10:29.395785   33162 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 17:10:29.399754   33162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:10:29.409524   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:29.503728   33162 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0725 17:10:29.524653   33162 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0725 17:10:29.524731   33162 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 17:10:29.558092   33162 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 17:10:29.558111   33162 docker.go:542] Images already preloaded, skipping extraction
	I0725 17:10:29.558184   33162 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 17:10:29.587899   33162 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 17:10:29.587918   33162 cache_images.go:84] Images are preloaded, skipping loading
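
The two listings report the same image set, confirming that the preloaded tarball already populated Docker's image store, so extraction is skipped. The listing itself is just the docker CLI with a Go template; the same call from Go, assuming docker is on PATH and a daemon is reachable:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log runs: list images as repository:tag, one per line.
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
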
	I0725 17:10:29.588031   33162 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 17:10:29.663722   33162 cni.go:95] Creating CNI manager for ""
	I0725 17:10:29.663735   33162 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:10:29.663750   33162 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0725 17:10:29.663767   33162 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220725170926-14919 NodeName:newest-cni-20220725170926-14919 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 17:10:29.663896   33162 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220725170926-14919"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
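
The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- markers. A small sketch that splits such a stream using only the standard library:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("kubeadm.yaml") // example path
	if err != nil {
		panic(err)
	}
	// Split on the bare document separator and print each document's first line.
	for i, doc := range strings.Split(string(b), "\n---\n") {
		header := strings.SplitN(strings.TrimSpace(doc), "\n", 2)[0]
		fmt.Printf("document %d: %s\n", i, header)
	}
}
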
	I0725 17:10:29.664003   33162 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220725170926-14919 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 17:10:29.664069   33162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0725 17:10:29.671642   33162 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 17:10:29.671692   33162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 17:10:29.678773   33162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0725 17:10:29.691506   33162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:10:29.704307   33162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0725 17:10:29.717632   33162 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 17:10:29.721370   33162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:10:29.730835   33162 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919 for IP: 192.168.76.2
	I0725 17:10:29.730956   33162 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key
	I0725 17:10:29.731012   33162 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key
	I0725 17:10:29.731101   33162 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/client.key
	I0725 17:10:29.731184   33162 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/apiserver.key.31bdca25
	I0725 17:10:29.731238   33162 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/proxy-client.key
	I0725 17:10:29.731449   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem (1338 bytes)
	W0725 17:10:29.731486   33162 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919_empty.pem, impossibly tiny 0 bytes
	I0725 17:10:29.731499   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:10:29.731529   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/ca.pem (1082 bytes)
	I0725 17:10:29.731557   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:10:29.731584   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/key.pem (1675 bytes)
	I0725 17:10:29.731661   33162 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem (1708 bytes)
	I0725 17:10:29.732224   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 17:10:29.749516   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 17:10:29.767634   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:10:29.784829   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/newest-cni-20220725170926-14919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:10:29.802003   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:10:29.819158   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 17:10:29.837643   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:10:29.854418   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 17:10:29.871121   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/ssl/certs/149192.pem --> /usr/share/ca-certificates/149192.pem (1708 bytes)
	I0725 17:10:29.888831   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:10:29.906470   33162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/certs/14919.pem --> /usr/share/ca-certificates/14919.pem (1338 bytes)
	I0725 17:10:29.923739   33162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 17:10:29.935970   33162 ssh_runner.go:195] Run: openssl version
	I0725 17:10:29.941798   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149192.pem && ln -fs /usr/share/ca-certificates/149192.pem /etc/ssl/certs/149192.pem"
	I0725 17:10:29.949647   33162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149192.pem
	I0725 17:10:29.953437   33162 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 22:58 /usr/share/ca-certificates/149192.pem
	I0725 17:10:29.953475   33162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149192.pem
	I0725 17:10:29.958685   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149192.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 17:10:29.965689   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:10:29.973553   33162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:10:29.977512   33162 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:10:29.977554   33162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:10:29.984480   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:10:29.991634   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14919.pem && ln -fs /usr/share/ca-certificates/14919.pem /etc/ssl/certs/14919.pem"
	I0725 17:10:29.999492   33162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14919.pem
	I0725 17:10:30.003199   33162 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 22:58 /usr/share/ca-certificates/14919.pem
	I0725 17:10:30.003247   33162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14919.pem
	I0725 17:10:30.008320   33162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14919.pem /etc/ssl/certs/51391683.0"
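
The openssl x509 -hash calls above compute each certificate's subject hash, and the ln -fs commands create the <hash>.0 symlinks that OpenSSL uses to locate trust anchors under /etc/ssl/certs. The same step sketched in Go, assuming an openssl binary on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// The argument is a PEM certificate, e.g. /usr/share/ca-certificates/minikubeCA.pem.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", os.Args[1]).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// OpenSSL resolves CAs in /etc/ssl/certs via <subject-hash>.0 symlinks,
	// which is exactly what the ln -fs commands in the log create.
	fmt.Printf("would link: /etc/ssl/certs/%s.0 -> %s\n", hash, os.Args[1])
}
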
	I0725 17:10:30.015441   33162 kubeadm.go:395] StartCluster: {Name:newest-cni-20220725170926-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220725170926-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 17:10:30.015575   33162 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 17:10:30.043891   33162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 17:10:30.051217   33162 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 17:10:30.051232   33162 kubeadm.go:626] restartCluster start
	I0725 17:10:30.051280   33162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 17:10:30.057850   33162 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.057966   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:30.133450   33162 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220725170926-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:10:30.133609   33162 kubeconfig.go:127] "newest-cni-20220725170926-14919" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig - will repair!
	I0725 17:10:30.133957   33162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:10:30.135316   33162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 17:10:30.142665   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.142722   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.150789   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.350928   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.351070   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.360111   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.551272   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.551407   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.562094   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.751690   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.751824   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.761903   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:30.952947   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:30.953087   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:30.963586   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.152852   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.153026   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.163487   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.350935   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.351078   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.360517   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.552584   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.552823   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.563420   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.752110   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.752218   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.763404   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:31.952598   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:31.952755   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:31.963313   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.152570   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.152722   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.163109   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.352596   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.352784   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.363770   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.550939   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.551002   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.560558   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.752982   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.753160   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.763614   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:32.951083   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:32.951172   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:32.960400   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.153040   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:33.153150   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:33.163587   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.163603   33162 api_server.go:165] Checking apiserver status ...
	I0725 17:10:33.163648   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 17:10:33.171326   33162 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.171337   33162 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
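
The repeated pgrep probes above form a poll-until-deadline loop: check roughly every 200ms, stop when the process appears or the timeout expires, then fall through to the reconfigure path. Schematically, with a stand-in check function in place of the real pgrep-over-SSH call:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil retries check roughly every interval until it succeeds or the
// deadline passes, like the pgrep retries above.
func pollUntil(timeout, interval time.Duration, check func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	start := time.Now()
	err := pollUntil(1*time.Second, 200*time.Millisecond, func() bool {
		return false // stand-in for "is kube-apiserver running?"
	})
	fmt.Println(err, "after", time.Since(start).Round(time.Millisecond))
}
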
	I0725 17:10:33.171344   33162 kubeadm.go:1092] stopping kube-system containers ...
	I0725 17:10:33.171406   33162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 17:10:33.202416   33162 docker.go:443] Stopping containers: [bb9e40d7b806 2e2b1e12a0d8 e3093a0bea73 a5c118b426c2 0f325df2490e b56e26e25b9e 78d80d7126ed eb8d77894732 c00a5e112263 54430765218a 22c1ccaaf65a 264f85de3b55 1ae34c8051d5 7e75f9965e1a 0c966b0d8030 caf103a64c25 3a3b08020459]
	I0725 17:10:33.202492   33162 ssh_runner.go:195] Run: docker stop bb9e40d7b806 2e2b1e12a0d8 e3093a0bea73 a5c118b426c2 0f325df2490e b56e26e25b9e 78d80d7126ed eb8d77894732 c00a5e112263 54430765218a 22c1ccaaf65a 264f85de3b55 1ae34c8051d5 7e75f9965e1a 0c966b0d8030 caf103a64c25 3a3b08020459
	I0725 17:10:33.234063   33162 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 17:10:33.245377   33162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:10:33.253298   33162 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 26 00:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 26 00:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul 26 00:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 26 00:09 /etc/kubernetes/scheduler.conf
	
	I0725 17:10:33.253358   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 17:10:33.261429   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 17:10:33.269924   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 17:10:33.277451   33162 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.277515   33162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 17:10:33.285562   33162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 17:10:33.294082   33162 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:10:33.294144   33162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 17:10:33.301728   33162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:10:33.309325   33162 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 17:10:33.309339   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:33.357983   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:33.990711   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:34.163018   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:34.211887   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:34.268693   33162 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:10:34.268801   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:34.814412   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:35.314380   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:35.329818   33162 api_server.go:71] duration metric: took 1.061125837s to wait for apiserver process to appear ...
	I0725 17:10:35.329834   33162 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:10:35.329847   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:35.331369   33162 api_server.go:256] stopped: https://127.0.0.1:52980/healthz: Get "https://127.0.0.1:52980/healthz": EOF
	I0725 17:10:35.832966   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:38.791798   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0725 17:10:38.831613   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:38.839025   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:10:39.331548   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:39.340855   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:10:39.831506   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:39.837149   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 17:10:40.331504   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:40.338177   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 200:
	ok
	I0725 17:10:40.344835   33162 api_server.go:140] control plane version: v1.24.3
	I0725 17:10:40.344850   33162 api_server.go:130] duration metric: took 5.014977391s to wait for apiserver health ...
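The progression above is the normal shape of an apiserver restart: first EOF while the process binds its port, then 403 because the probe is anonymous and the system:public-info-viewer RBAC role is not yet reconciled, then 500 while the post-start hooks drain one by one, and finally 200. minikube polls /healthz roughly every 500ms; the same wait can be approximated with curl (-k because the port serves a self-signed certificate, ?verbose to print the per-check list seen in the 500 bodies):

    until curl -ksf https://127.0.0.1:52980/healthz >/dev/null; do
        sleep 0.5    # matches the ~500ms retry interval in the log
    done
    curl -ks "https://127.0.0.1:52980/healthz?verbose"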
	I0725 17:10:40.344856   33162 cni.go:95] Creating CNI manager for ""
	I0725 17:10:40.344860   33162 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 17:10:40.344872   33162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:10:40.352662   33162 system_pods.go:59] 9 kube-system pods found
	I0725 17:10:40.352682   33162 system_pods.go:61] "coredns-6d4b75cb6d-dmnl4" [75f79fe8-36b7-421f-bb6c-f04ddc553086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:40.352688   33162 system_pods.go:61] "coredns-6d4b75cb6d-nwgth" [9423c7c6-992c-437c-ad7e-28a2ab1eecdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:40.352693   33162 system_pods.go:61] "etcd-newest-cni-20220725170926-14919" [7aca802c-2727-4227-9c2c-c969f0a334cf] Running
	I0725 17:10:40.352697   33162 system_pods.go:61] "kube-apiserver-newest-cni-20220725170926-14919" [aa239dc3-e3c0-4446-957a-24cd198cbb3c] Running
	I0725 17:10:40.352701   33162 system_pods.go:61] "kube-controller-manager-newest-cni-20220725170926-14919" [a5400bd1-f383-426d-b6f6-265553b518ea] Running
	I0725 17:10:40.352704   33162 system_pods.go:61] "kube-proxy-thgm5" [2bd1bc65-9c26-4b8e-86b9-3e0bd3599e69] Running
	I0725 17:10:40.352709   33162 system_pods.go:61] "kube-scheduler-newest-cni-20220725170926-14919" [9eaafaf3-71e5-4e23-8f04-0b6c5c8e1357] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 17:10:40.352718   33162 system_pods.go:61] "metrics-server-5c6f97fb75-lsp4c" [6751fa1e-1d48-4008-9432-cdac2124118b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:10:40.352722   33162 system_pods.go:61] "storage-provisioner" [50d8c534-72e2-4185-b2d1-5ce19567413e] Running
	I0725 17:10:40.352726   33162 system_pods.go:74] duration metric: took 7.849401ms to wait for pod list to return data ...
	I0725 17:10:40.352733   33162 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:10:40.355773   33162 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:10:40.355787   33162 node_conditions.go:123] node cpu capacity is 6
	I0725 17:10:40.355801   33162 node_conditions.go:105] duration metric: took 3.065104ms to run NodePressure ...
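The NodePressure check only reads capacity off the node object; the same figures (6 CPUs, 61255492Ki ephemeral storage) are visible with:

    kubectl get node -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}'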
	I0725 17:10:40.355813   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 17:10:40.557739   33162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 17:10:40.604152   33162 ops.go:34] apiserver oom_adj: -16
	I0725 17:10:40.604170   33162 kubeadm.go:630] restartCluster took 10.552861404s
	I0725 17:10:40.604181   33162 kubeadm.go:397] StartCluster complete in 10.58867566s
	I0725 17:10:40.604201   33162 settings.go:142] acquiring lock: {Name:mkcd702d4f365962a78fa014f59c2f8489658e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:10:40.604299   33162 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 17:10:40.605113   33162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig: {Name:mkdad3cd1a8928cc2eb17d87854967e3e52d5524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:10:40.609196   33162 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220725170926-14919" rescaled to 1
	I0725 17:10:40.609249   33162 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 17:10:40.609304   33162 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 17:10:40.609312   33162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 17:10:40.633783   33162 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220725170926-14919"
	I0725 17:10:40.633785   33162 addons.go:65] Setting dashboard=true in profile "newest-cni-20220725170926-14919"
	I0725 17:10:40.633802   33162 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220725170926-14919"
	I0725 17:10:40.633805   33162 addons.go:153] Setting addon dashboard=true in "newest-cni-20220725170926-14919"
	I0725 17:10:40.633804   33162 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220725170926-14919"
	I0725 17:10:40.609473   33162 config.go:178] Loaded profile config "newest-cni-20220725170926-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 17:10:40.633819   33162 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220725170926-14919"
	W0725 17:10:40.633826   33162 addons.go:162] addon metrics-server should already be in state true
	W0725 17:10:40.633817   33162 addons.go:162] addon storage-provisioner should already be in state true
	I0725 17:10:40.633679   33162 out.go:177] * Verifying Kubernetes components...
	I0725 17:10:40.633830   33162 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220725170926-14919"
	W0725 17:10:40.633816   33162 addons.go:162] addon dashboard should already be in state true
	I0725 17:10:40.633869   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.691865   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.633872   33162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220725170926-14919"
	I0725 17:10:40.691912   33162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:10:40.633885   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.692553   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.692555   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.692555   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.692649   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.825703   33162 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220725170926-14919"
	W0725 17:10:40.896300   33162 addons.go:162] addon default-storageclass should already be in state true
	I0725 17:10:40.896340   33162 host.go:66] Checking if "newest-cni-20220725170926-14919" exists ...
	I0725 17:10:40.826795   33162 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0725 17:10:40.826824   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:40.837858   33162 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 17:10:40.859007   33162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 17:10:40.896237   33162 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 17:10:40.898501   33162 cli_runner.go:164] Run: docker container inspect newest-cni-20220725170926-14919 --format={{.State.Status}}
	I0725 17:10:40.996949   33162 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 17:10:40.939331   33162 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:10:40.976132   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 17:10:40.997061   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 17:10:41.035393   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 17:10:41.035423   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 17:10:41.035395   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 17:10:41.035542   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.035653   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.035666   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.062617   33162 api_server.go:51] waiting for apiserver process to appear ...
	I0725 17:10:41.062869   33162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:10:41.064899   33162 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 17:10:41.064918   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 17:10:41.065018   33162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725170926-14919
	I0725 17:10:41.078455   33162 api_server.go:71] duration metric: took 469.124908ms to wait for apiserver process to appear ...
	I0725 17:10:41.078510   33162 api_server.go:87] waiting for apiserver healthz status ...
	I0725 17:10:41.078542   33162 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52980/healthz ...
	I0725 17:10:41.088995   33162 api_server.go:266] https://127.0.0.1:52980/healthz returned 200:
	ok
	I0725 17:10:41.090707   33162 api_server.go:140] control plane version: v1.24.3
	I0725 17:10:41.090725   33162 api_server.go:130] duration metric: took 12.204434ms to wait for apiserver health ...
	I0725 17:10:41.090732   33162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:10:41.100882   33162 system_pods.go:59] 9 kube-system pods found
	I0725 17:10:41.100912   33162 system_pods.go:61] "coredns-6d4b75cb6d-dmnl4" [75f79fe8-36b7-421f-bb6c-f04ddc553086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:41.100936   33162 system_pods.go:61] "coredns-6d4b75cb6d-nwgth" [9423c7c6-992c-437c-ad7e-28a2ab1eecdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:10:41.100947   33162 system_pods.go:61] "etcd-newest-cni-20220725170926-14919" [7aca802c-2727-4227-9c2c-c969f0a334cf] Running
	I0725 17:10:41.100956   33162 system_pods.go:61] "kube-apiserver-newest-cni-20220725170926-14919" [aa239dc3-e3c0-4446-957a-24cd198cbb3c] Running
	I0725 17:10:41.100967   33162 system_pods.go:61] "kube-controller-manager-newest-cni-20220725170926-14919" [a5400bd1-f383-426d-b6f6-265553b518ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 17:10:41.100973   33162 system_pods.go:61] "kube-proxy-thgm5" [2bd1bc65-9c26-4b8e-86b9-3e0bd3599e69] Running
	I0725 17:10:41.100990   33162 system_pods.go:61] "kube-scheduler-newest-cni-20220725170926-14919" [9eaafaf3-71e5-4e23-8f04-0b6c5c8e1357] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 17:10:41.100996   33162 system_pods.go:61] "metrics-server-5c6f97fb75-lsp4c" [6751fa1e-1d48-4008-9432-cdac2124118b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:10:41.101006   33162 system_pods.go:61] "storage-provisioner" [50d8c534-72e2-4185-b2d1-5ce19567413e] Running
	I0725 17:10:41.101012   33162 system_pods.go:74] duration metric: took 10.276317ms to wait for pod list to return data ...
	I0725 17:10:41.101018   33162 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:10:41.104454   33162 default_sa.go:45] found service account: "default"
	I0725 17:10:41.104471   33162 default_sa.go:55] duration metric: took 3.4456ms for default service account to be created ...
	I0725 17:10:41.104481   33162 kubeadm.go:572] duration metric: took 495.187773ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0725 17:10:41.104501   33162 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:10:41.109202   33162 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0725 17:10:41.109220   33162 node_conditions.go:123] node cpu capacity is 6
	I0725 17:10:41.109230   33162 node_conditions.go:105] duration metric: took 4.725267ms to run NodePressure ...
	I0725 17:10:41.109240   33162 start.go:216] waiting for startup goroutines ...
	I0725 17:10:41.154538   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.155606   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.159137   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.171747   33162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52976 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/newest-cni-20220725170926-14919/id_rsa Username:docker}
	I0725 17:10:41.277597   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 17:10:41.277615   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 17:10:41.277691   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 17:10:41.277701   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 17:10:41.288691   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:10:41.300595   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 17:10:41.300646   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 17:10:41.305575   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 17:10:41.305588   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 17:10:41.305589   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 17:10:41.320296   33162 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:10:41.320311   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 17:10:41.325064   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 17:10:41.325088   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 17:10:41.343386   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:10:41.352726   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 17:10:41.352746   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 17:10:41.429907   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 17:10:41.429923   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 17:10:41.447523   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 17:10:41.447537   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 17:10:41.516266   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 17:10:41.516285   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 17:10:41.535836   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 17:10:41.535850   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 17:10:41.554058   33162 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 17:10:41.554073   33162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 17:10:41.572255   33162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
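Each addon is installed by writing the embedded manifest onto the node (the "scp memory -->" lines) and applying it with the bundled kubectl against the in-VM kubeconfig. Stripped of the SSH plumbing, the apply step is just what the log shows:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.24.3/kubectl apply \
        -f /etc/kubernetes/addons/dashboard-ns.yaml \
        -f /etc/kubernetes/addons/dashboard-svc.yaml    # plus the other eight dashboard manifests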
	I0725 17:10:42.151794   33162 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220725170926-14919"
	I0725 17:10:42.288108   33162 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 17:10:42.345471   33162 addons.go:414] enableAddons completed in 1.736158741s
	I0725 17:10:42.381249   33162 start.go:506] kubectl: 1.24.1, cluster: 1.24.3 (minor skew: 0)
	I0725 17:10:42.403296   33162 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220725170926-14919" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-07-26 00:10:25 UTC, end at Tue 2022-07-26 00:11:28 UTC. --
	Jul 26 00:10:41 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:41.180559056Z" level=info msg="ignoring event" container=4453b77131a787200fe4628ba95c4651cae4b07ee0e7d1dee55830d11a39f504 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:10:42 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:42.653384484Z" level=info msg="ignoring event" container=d84f255b5da60225a20db24129e9ed5389967f70e6e0882758969aa5f15e755b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:10:42 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:42.726537855Z" level=info msg="ignoring event" container=6521a581957e6bbfe598aff584db6c5364a5414bf399606d5e8cad159168a004 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:10:43 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:43.524620787Z" level=info msg="ignoring event" container=195368b19c556368665cfea91eb39a0328b0d6837c63098f754e07ea3a2ebe95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:10:43 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:10:43.546319588Z" level=info msg="ignoring event" container=a443999ee44bdf2fdb6b53f1800e217508044feb54bc9bd869b0329bae99b7fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:21 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:21.072809952Z" level=info msg="ignoring event" container=6e0a5c0d14245dc4e5def32fbd6bebc2b28acfbcc35cb087e3bf4c1b54832b9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:21 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:21.615358392Z" level=info msg="ignoring event" container=0eb2ec416adbb025a17087009795b583d3455ff3a9e55d26497dbad06a18271f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:21 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:21.686492138Z" level=info msg="ignoring event" container=e392cf371d3e3cf21ac32f2eced22a78948eb017bfc4b6256bdeff2a801356d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:21 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:21.835121057Z" level=info msg="ignoring event" container=b062c20a252ee1bf3d545ba9d6fbd2ab0e107ae11a9a18bca0546f076ab40af3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:23 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:23.508859534Z" level=info msg="ignoring event" container=ff62a12142773b808694d0014fa2ed0c493080493abe551b0759581e0231965d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:23 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:23.516030514Z" level=info msg="ignoring event" container=a6b9eb0ef2cabca23f85a9309a095ba36c5138335be30184ca89befacf496bb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:23 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:23.524379226Z" level=info msg="ignoring event" container=172c65e00048a699dbdb9caa4f00095caf42ef9af3f94f6f66f4853041379ed7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:23 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:23.531396902Z" level=info msg="ignoring event" container=f91fd8e4f3cc2c42b4e7b643e51b5f41c1418ec954c4877adc52376acb7eeb71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:24 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:24.453114807Z" level=info msg="ignoring event" container=19ec798472a47314711defd4d13bdd3cba526385be9d4b105a79d520e4dc230f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:24 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:24.483829880Z" level=info msg="ignoring event" container=0a3ec59afeb1e8d4c3a5b477983aa6b77b54ef07b8fde788ebecaa5d9c6e89d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:24 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:24.505978231Z" level=info msg="ignoring event" container=ec68d3380ba1c4f7839981a31b12e9f7beecdbb61e4bff8e1929a9679ed4c908 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:24 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:24.576759338Z" level=info msg="ignoring event" container=82f349a6c0263bb5739b3ff385a18207b70eb9d20e8b10195711f1b81e053044 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:26 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:26.886214457Z" level=info msg="ignoring event" container=92c2a3bb5c26dad0fad29873a8b136814a5ef359ff48e07fbe3e0b2d9c982c55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:26 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:26.894407906Z" level=info msg="ignoring event" container=bcd5ab10213a387caef637a54ba64baf423f5f7c54e2c9a6396818d68bbff0c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:26 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:26.901984346Z" level=info msg="ignoring event" container=6ff97410428339d53e03af12fbe2624c84503a4874a6e8e85348240f6de912bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:27 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:27.257451467Z" level=info msg="ignoring event" container=f5731c05899f52bf92ef4cc3a6d3fb9850b265e69dc9c7219ab910f8b929cf18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:28.609233847Z" level=info msg="ignoring event" container=c83b7466ec3672178de94b651394f0264ad448aab2ffda0d8057ccb9ba4acbaa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:28.682722502Z" level=info msg="ignoring event" container=61003072dfd24e87e08dd9dcfd68f27dd88ce972d28799fe6766e608c275cbd8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:28.684795373Z" level=info msg="ignoring event" container=03ad0eb2cf58fa79734aa35da8f655041cecce956ff6f0457d088ea5d931815d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 26 00:11:28 newest-cni-20220725170926-14919 dockerd[599]: time="2022-07-26T00:11:28.698488748Z" level=info msg="ignoring event" container=d206ba1d34643e1e79a4de3e134691256fcfbd7d460d901911421b24545de74f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
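The Docker section above is the systemd journal for the docker unit inside the node container (the "-- Logs begin at ..." banner is journalctl's); while the cluster is still up it can be regathered with something like:

    minikube -p newest-cni-20220725170926-14919 ssh -- sudo journalctl -u docker --no-pager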
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	808bbbe79ed5e       6e38f40d628db       49 seconds ago       Running             storage-provisioner       1                   e6aec7f6992b8
	42627e85bae2f       2ae1ba6417cbc       50 seconds ago       Running             kube-proxy                1                   92e03cae9e64e
	eccbd318b51f9       586c112956dfc       54 seconds ago       Running             kube-controller-manager   1                   4b390efc25066
	608941598c4c9       d521dd763e2e3       54 seconds ago       Running             kube-apiserver            1                   59c6e5e4348f7
	cd7d7c0a4b6e5       aebe758cef4cd       54 seconds ago       Running             etcd                      1                   f1808f8313c96
	4ae07a81558e3       3a5aa3a515f5d       54 seconds ago       Running             kube-scheduler            1                   b1eb14474d4fb
	e3093a0bea734       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   a5c118b426c2d
	eb8d778947323       2ae1ba6417cbc       About a minute ago   Exited              kube-proxy                0                   c00a5e1122630
	54430765218a2       aebe758cef4cd       About a minute ago   Exited              etcd                      0                   22c1ccaaf65af
	264f85de3b55e       586c112956dfc       About a minute ago   Exited              kube-controller-manager   0                   0c966b0d8030f
	1ae34c8051d51       d521dd763e2e3       About a minute ago   Exited              kube-apiserver            0                   3a3b080204591
	7e75f9965e1a6       3a5aa3a515f5d       About a minute ago   Exited              kube-scheduler            0                   caf103a64c255
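The table above is CRI-formatted container status: the attempt-0 "Exited" rows are the control-plane containers stopped at 00:10:33, and the attempt-1 "Running" rows are their post-restart replacements (note the matching image IDs). On the node this corresponds to something like:

    minikube -p newest-cni-20220725170926-14919 ssh -- sudo crictl ps -a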
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220725170926-14919
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220725170926-14919
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4443934bf233ca7893434b640d9d8995991115b
	                    minikube.k8s.io/name=newest-cni-20220725170926-14919
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T17_09_54_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Jul 2022 00:09:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220725170926-14919
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Jul 2022 00:11:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Jul 2022 00:11:17 +0000   Tue, 26 Jul 2022 00:09:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Jul 2022 00:11:17 +0000   Tue, 26 Jul 2022 00:09:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Jul 2022 00:11:17 +0000   Tue, 26 Jul 2022 00:09:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Jul 2022 00:11:17 +0000   Tue, 26 Jul 2022 00:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-20220725170926-14919
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                0b73bc9f-1df2-4cb3-ad1c-9ce261e8373c
	  Boot ID:                    95c3cee9-5325-46b1-8645-b2afb4791ab2
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-nwgth                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     81s
	  kube-system                 etcd-newest-cni-20220725170926-14919                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         95s
	  kube-system                 kube-apiserver-newest-cni-20220725170926-14919             250m (4%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-newest-cni-20220725170926-14919    200m (3%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-thgm5                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-newest-cni-20220725170926-14919             100m (1%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 metrics-server-5c6f97fb75-lsp4c                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         78s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-f9vrj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-qnd8s                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 80s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  106s (x4 over 106s)  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x4 over 106s)  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x4 over 106s)  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 95s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s                  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s                  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s                  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           82s                  node-controller  Node newest-cni-20220725170926-14919 event: Registered Node newest-cni-20220725170926-14919 in Controller
	  Normal  NodeAllocatableEnforced  55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    55s (x5 over 55s)    kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x5 over 55s)    kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  55s (x5 over 55s)    kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s                  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s                  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s                  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             12s                  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeNotReady
	  Normal  NodeReady                12s                  kubelet          Node newest-cni-20220725170926-14919 status is now: NodeReady
	  Normal  RegisteredNode           11s                  node-controller  Node newest-cni-20220725170926-14919 event: Registered Node newest-cni-20220725170926-14919 in Controller
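The node description above is ordinary kubectl output; with the profile's kubeconfig active it is reproducible with:

    kubectl describe node newest-cni-20220725170926-14919

The repeated Starting/NodeHasSufficient* event runs (106s, 95s, 55s, 12s ago) correspond to the successive kubelet restarts visible earlier in the log.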
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [54430765218a] <==
	* {"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:09:49.394Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220725170926-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:09:49.395Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:09:49.396Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-26T00:09:49.396Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-26T00:09:49.396Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-26T00:09:49.397Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-26T00:10:12.031Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-26T00:10:12.031Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220725170926-14919","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/07/26 00:10:12 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/26 00:10:12 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-26T00:10:12.041Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-07-26T00:10:12.043Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:10:12.043Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:10:12.043Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220725170926-14919","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [cd7d7c0a4b6e] <==
	* {"level":"info","ts":"2022-07-26T00:10:35.378Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-26T00:10:35.378Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:10:35.427Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-26T00:10:35.430Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-26T00:10:35.430Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-26T00:10:35.431Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-26T00:10:35.431Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:10:35.431Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2022-07-26T00:10:37.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-26T00:10:37.046Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220725170926-14919 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-26T00:10:37.046Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:10:37.046Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-26T00:10:37.047Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-26T00:10:37.047Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-26T00:10:37.048Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-26T00:10:37.048Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-26T00:11:22.789Z","caller":"traceutil/trace.go:171","msg":"trace[216262891] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"131.655757ms","start":"2022-07-26T00:11:22.657Z","end":"2022-07-26T00:11:22.789Z","steps":["trace[216262891] 'process raft request'  (duration: 91.967897ms)","trace[216262891] 'compare'  (duration: 39.567562ms)"],"step_count":2}
	{"level":"warn","ts":"2022-07-26T00:11:25.446Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"152.915486ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-f9vrj.1705389c82f547bf\" ","response":"range_response_count:1 size:786"}
	{"level":"info","ts":"2022-07-26T00:11:25.446Z","caller":"traceutil/trace.go:171","msg":"trace[969874608] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-f9vrj.1705389c82f547bf; range_end:; response_count:1; response_revision:541; }","duration":"153.068111ms","start":"2022-07-26T00:11:25.293Z","end":"2022-07-26T00:11:25.446Z","steps":["trace[969874608] 'agreement among raft nodes before linearized reading'  (duration: 36.237188ms)","trace[969874608] 'range keys from in-memory index tree'  (duration: 116.642967ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  00:11:30 up  1:18,  0 users,  load average: 1.95, 1.16, 1.06
	Linux newest-cni-20220725170926-14919 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1ae34c8051d5] <==
	* W0726 00:10:13.036541       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036544       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036565       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036571       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036588       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036589       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036602       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036613       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036614       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036565       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036631       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036643       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036651       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036664       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036668       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036682       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036683       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036687       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036693       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036706       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036752       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036771       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036789       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036915       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0726 00:10:13.036929       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [608941598c4c] <==
	* I0726 00:10:38.858848       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0726 00:10:38.858852       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0726 00:10:38.906530       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0726 00:10:38.920582       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0726 00:10:39.545723       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0726 00:10:39.763394       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0726 00:10:39.926282       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:10:39.926304       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0726 00:10:39.926310       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0726 00:10:39.926347       1 handler_proxy.go:102] no RequestInfo found in the context
	E0726 00:10:39.926379       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0726 00:10:39.927513       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0726 00:10:40.112677       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0726 00:10:40.451373       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0726 00:10:40.466448       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0726 00:10:40.527879       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0726 00:10:40.541711       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0726 00:10:40.547449       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0726 00:10:42.049725       1 controller.go:611] quota admission added evaluator for: namespaces
	I0726 00:10:42.247266       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.110.148.216]
	I0726 00:10:42.258091       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.107.175.56]
	I0726 00:11:17.320510       1 controller.go:611] quota admission added evaluator for: endpoints
	I0726 00:11:18.399010       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0726 00:11:18.548520       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [264f85de3b55] <==
	* I0726 00:10:07.914055       1 shared_informer.go:262] Caches are synced for job
	I0726 00:10:07.914086       1 shared_informer.go:262] Caches are synced for attach detach
	I0726 00:10:07.914305       1 shared_informer.go:262] Caches are synced for PVC protection
	I0726 00:10:07.915641       1 shared_informer.go:262] Caches are synced for persistent volume
	I0726 00:10:07.972311       1 shared_informer.go:262] Caches are synced for resource quota
	I0726 00:10:07.990545       1 shared_informer.go:262] Caches are synced for taint
	I0726 00:10:07.990701       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0726 00:10:07.990869       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220725170926-14919. Assuming now as a timestamp.
	I0726 00:10:07.990991       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0726 00:10:07.990989       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0726 00:10:07.991269       1 event.go:294] "Event occurred" object="newest-cni-20220725170926-14919" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220725170926-14919 event: Registered Node newest-cni-20220725170926-14919 in Controller"
	I0726 00:10:08.015359       1 shared_informer.go:262] Caches are synced for endpoint
	I0726 00:10:08.015435       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0726 00:10:08.018502       1 shared_informer.go:262] Caches are synced for resource quota
	I0726 00:10:08.434369       1 shared_informer.go:262] Caches are synced for garbage collector
	I0726 00:10:08.465396       1 shared_informer.go:262] Caches are synced for garbage collector
	I0726 00:10:08.465439       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0726 00:10:08.619948       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-thgm5"
	I0726 00:10:08.668099       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0726 00:10:08.818220       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dmnl4"
	I0726 00:10:08.823546       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-nwgth"
	I0726 00:10:08.865894       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0726 00:10:08.931768       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-dmnl4"
	I0726 00:10:11.235602       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0726 00:10:11.241626       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-lsp4c"
	
	* 
	* ==> kube-controller-manager [eccbd318b51f] <==
	* I0726 00:11:18.221650       1 shared_informer.go:262] Caches are synced for attach detach
	I0726 00:11:18.222661       1 shared_informer.go:262] Caches are synced for GC
	I0726 00:11:18.225486       1 shared_informer.go:262] Caches are synced for job
	I0726 00:11:18.228807       1 shared_informer.go:262] Caches are synced for taint
	I0726 00:11:18.228890       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0726 00:11:18.228962       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220725170926-14919. Assuming now as a timestamp.
	I0726 00:11:18.228989       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0726 00:11:18.229017       1 event.go:294] "Event occurred" object="newest-cni-20220725170926-14919" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220725170926-14919 event: Registered Node newest-cni-20220725170926-14919 in Controller"
	I0726 00:11:18.229038       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0726 00:11:18.230650       1 shared_informer.go:262] Caches are synced for ephemeral
	I0726 00:11:18.287050       1 shared_informer.go:262] Caches are synced for PVC protection
	I0726 00:11:18.295799       1 shared_informer.go:262] Caches are synced for deployment
	I0726 00:11:18.296612       1 shared_informer.go:262] Caches are synced for persistent volume
	I0726 00:11:18.300007       1 shared_informer.go:262] Caches are synced for daemon sets
	I0726 00:11:18.308597       1 shared_informer.go:262] Caches are synced for endpoint
	I0726 00:11:18.311172       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0726 00:11:18.396024       1 shared_informer.go:262] Caches are synced for resource quota
	I0726 00:11:18.399423       1 shared_informer.go:262] Caches are synced for resource quota
	I0726 00:11:18.551728       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0726 00:11:18.553921       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0726 00:11:18.702362       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-qnd8s"
	I0726 00:11:18.705208       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-f9vrj"
	I0726 00:11:18.820024       1 shared_informer.go:262] Caches are synced for garbage collector
	I0726 00:11:18.899968       1 shared_informer.go:262] Caches are synced for garbage collector
	I0726 00:11:18.900002       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [42627e85bae2] <==
	* I0726 00:10:40.093380       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0726 00:10:40.093436       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0726 00:10:40.093456       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0726 00:10:40.109947       1 server_others.go:206] "Using iptables Proxier"
	I0726 00:10:40.110029       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0726 00:10:40.110038       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0726 00:10:40.110047       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0726 00:10:40.110069       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:10:40.110231       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:10:40.110362       1 server.go:661] "Version info" version="v1.24.3"
	I0726 00:10:40.110388       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0726 00:10:40.110880       1 config.go:317] "Starting service config controller"
	I0726 00:10:40.110935       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0726 00:10:40.111192       1 config.go:444] "Starting node config controller"
	I0726 00:10:40.111197       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0726 00:10:40.111214       1 config.go:226] "Starting endpoint slice config controller"
	I0726 00:10:40.111217       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0726 00:10:40.211641       1 shared_informer.go:262] Caches are synced for service config
	I0726 00:10:40.211674       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0726 00:10:40.211731       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [eb8d77894732] <==
	* I0726 00:10:09.139940       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0726 00:10:09.139992       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0726 00:10:09.140012       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0726 00:10:09.167842       1 server_others.go:206] "Using iptables Proxier"
	I0726 00:10:09.167886       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0726 00:10:09.167893       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0726 00:10:09.167903       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0726 00:10:09.168144       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:10:09.168480       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0726 00:10:09.169215       1 server.go:661] "Version info" version="v1.24.3"
	I0726 00:10:09.169298       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0726 00:10:09.169749       1 config.go:317] "Starting service config controller"
	I0726 00:10:09.169802       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0726 00:10:09.170316       1 config.go:444] "Starting node config controller"
	I0726 00:10:09.170401       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0726 00:10:09.170436       1 config.go:226] "Starting endpoint slice config controller"
	I0726 00:10:09.170564       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0726 00:10:09.270361       1 shared_informer.go:262] Caches are synced for service config
	I0726 00:10:09.271609       1 shared_informer.go:262] Caches are synced for node config
	I0726 00:10:09.274910       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [4ae07a81558e] <==
	* W0726 00:10:35.437985       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0726 00:10:35.949769       1 serving.go:348] Generated self-signed cert in-memory
	W0726 00:10:38.811321       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0726 00:10:38.811426       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0726 00:10:38.811434       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0726 00:10:38.811440       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0726 00:10:38.827253       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0726 00:10:38.827329       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0726 00:10:38.829104       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0726 00:10:38.829609       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0726 00:10:38.829628       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0726 00:10:38.831475       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0726 00:10:38.932990       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [7e75f9965e1a] <==
	* E0726 00:09:51.782153       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0726 00:09:51.782230       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0726 00:09:51.782294       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0726 00:09:51.782817       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:09:51.782898       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0726 00:09:51.782933       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0726 00:09:51.783020       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0726 00:09:52.599735       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0726 00:09:52.599847       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0726 00:09:52.614158       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0726 00:09:52.614187       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0726 00:09:52.728213       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0726 00:09:52.729738       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0726 00:09:52.729692       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0726 00:09:52.729931       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0726 00:09:52.739693       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0726 00:09:52.739727       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0726 00:09:52.781650       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0726 00:09:52.781739       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0726 00:09:52.901090       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0726 00:09:52.901262       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0726 00:09:55.378413       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0726 00:10:12.026583       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0726 00:10:12.026606       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0726 00:10:12.026971       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-07-26 00:10:25 UTC, end at Tue 2022-07-26 00:11:32 UTC. --
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         ]
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:  > pod="kube-system/coredns-6d4b75cb6d-nwgth"
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]: E0726 00:11:31.347950    3620 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-nwgth_kube-system(9423c7c6-992c-437c-ad7e-28a2ab1eecdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-nwgth_kube-system(9423c7c6-992c-437c-ad7e-28a2ab1eecdc)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"19edcb576ac4164c51adb33f16d15d11ed0878272554dab0c64c1e4fc6b9cab1\\\" network for pod \\\"coredns-6d4b75cb6d-nwgth\\\": networkPlugin cni failed to set up pod \\\"coredns-6d4b75cb6d-nwgth_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"19edcb576ac4164c51adb33f16d15d11ed0878272554dab0c64c1e4fc6b9cab1\\\" network for pod \\\"coredns-6d4b75cb6d-nwgth\\\": networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-nwgth_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.36 -j CNI-ad6030388a7ab24678790d65 -m comment --comment name: \\\"crio\\\" id: \\\"19edcb576ac4164c51adb33f16d15d11ed0878272554dab0c64c1e4fc6b9cab1\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ad6030388a7ab24678790d65':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-6d4b75cb6d-nwgth" podUID=9423c7c6-992c-437c-ad7e-28a2ab1eecdc
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]: E0726 00:11:31.347968    3620 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err=<
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         rpc error: code = Unknown desc = [failed to set up sandbox container "7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12" network for pod "dashboard-metrics-scraper-dffd48c4c-f9vrj": networkPlugin cni failed to set up pod "dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12" network for pod "dashboard-metrics-scraper-dffd48c4c-f9vrj": networkPlugin cni failed to teardown pod "dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.37 -j CNI-f17b2eb7da9dd2f31f28b7cc -m comment --comment name: "crio" id: "7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f17b2eb7da9dd2f31f28b7cc':No such file or directory
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         ]
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:  >
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]: E0726 00:11:31.347998    3620 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         rpc error: code = Unknown desc = [failed to set up sandbox container "7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12" network for pod "dashboard-metrics-scraper-dffd48c4c-f9vrj": networkPlugin cni failed to set up pod "dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12" network for pod "dashboard-metrics-scraper-dffd48c4c-f9vrj": networkPlugin cni failed to teardown pod "dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.37 -j CNI-f17b2eb7da9dd2f31f28b7cc -m comment --comment name: "crio" id: "7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f17b2eb7da9dd2f31f28b7cc':No such file or directory
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         ]
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:  > pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-f9vrj"
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]: E0726 00:11:31.348015    3620 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         rpc error: code = Unknown desc = [failed to set up sandbox container "7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12" network for pod "dashboard-metrics-scraper-dffd48c4c-f9vrj": networkPlugin cni failed to set up pod "dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12" network for pod "dashboard-metrics-scraper-dffd48c4c-f9vrj": networkPlugin cni failed to teardown pod "dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.37 -j CNI-f17b2eb7da9dd2f31f28b7cc -m comment --comment name: "crio" id: "7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f17b2eb7da9dd2f31f28b7cc':No such file or directory
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:         ]
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]:  > pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-f9vrj"
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]: E0726 00:11:31.348063    3620 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard(583917cc-373c-4d5a-8d68-6972ef0a0625)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard(583917cc-373c-4d5a-8d68-6972ef0a0625)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12\\\" network for pod \\\"dashboard-metrics-scraper-dffd48c4c-f9vrj\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12\\\" network for pod \\\"dashboard-metrics-scraper-dffd48c4c-f9vrj\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-dffd48c4c-f9vrj_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.37 -j CNI-f17b2eb7da9dd2f31f28b7cc -m comment --comment name: \\\"crio\\\" id: \\\"7b44e5bbd1615f2c550f7735b4fb8a4fc91d26b3d2a04993cc9fc0690e33ba12\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f17b2eb7da9dd2f31f28b7cc':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-f9vrj" podUID=583917cc-373c-4d5a-8d68-6972ef0a0625
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]: I0726 00:11:31.365323    3620 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="6f34382855b9fa929385065e8c839d8a18f06eb8197836bd0a1cf08dd0b9c2f1"
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]: I0726 00:11:31.377332    3620 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="fefb813c8f8dc4ceb1e23763826ba113aca4c90417d516ffbe4be31de70c5781"
	Jul 26 00:11:31 newest-cni-20220725170926-14919 kubelet[3620]: I0726 00:11:31.392496    3620 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8ccab8ee2cce28c0a1a596ccdad4b00844a217b28126ef071ce5bcaa8e6a6c6e"
	
	* 
	* ==> storage-provisioner [808bbbe79ed5] <==
	* I0726 00:10:41.221365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0726 00:10:41.232689       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0726 00:10:41.232781       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0726 00:11:17.324569       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0726 00:11:17.324713       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725170926-14919_101de8b1-4e96-49cb-bbf2-b75bfdf53fd7!
	I0726 00:11:17.324744       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6776a20a-b9cc-4a7f-abca-7da433162f63", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220725170926-14919_101de8b1-4e96-49cb-bbf2-b75bfdf53fd7 became leader
	I0726 00:11:17.425151       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725170926-14919_101de8b1-4e96-49cb-bbf2-b75bfdf53fd7!
	
	* 
	* ==> storage-provisioner [e3093a0bea73] <==
	* I0726 00:10:10.996092       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0726 00:10:11.007658       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0726 00:10:11.007708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0726 00:10:11.028484       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0726 00:10:11.028674       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725170926-14919_69baca53-05f8-4024-b3bd-3d34ac15026e!
	I0726 00:10:11.028675       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6776a20a-b9cc-4a7f-abca-7da433162f63", APIVersion:"v1", ResourceVersion:"378", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220725170926-14919_69baca53-05f8-4024-b3bd-3d34ac15026e became leader
	I0726 00:10:11.129208       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725170926-14919_69baca53-05f8-4024-b3bd-3d34ac15026e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220725170926-14919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-nwgth metrics-server-5c6f97fb75-lsp4c dashboard-metrics-scraper-dffd48c4c-f9vrj kubernetes-dashboard-5fd5574d9f-qnd8s
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220725170926-14919 describe pod coredns-6d4b75cb6d-nwgth metrics-server-5c6f97fb75-lsp4c dashboard-metrics-scraper-dffd48c4c-f9vrj kubernetes-dashboard-5fd5574d9f-qnd8s
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220725170926-14919 describe pod coredns-6d4b75cb6d-nwgth metrics-server-5c6f97fb75-lsp4c dashboard-metrics-scraper-dffd48c4c-f9vrj kubernetes-dashboard-5fd5574d9f-qnd8s: exit status 1 (267.487712ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-nwgth" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-lsp4c" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-f9vrj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-qnd8s" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220725170926-14919 describe pod coredns-6d4b75cb6d-nwgth metrics-server-5c6f97fb75-lsp4c dashboard-metrics-scraper-dffd48c4c-f9vrj kubernetes-dashboard-5fd5574d9f-qnd8s: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (50.69s)

                                                
                                    

Test pass (247/289)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 71.92
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.32
10 TestDownloadOnly/v1.24.3/json-events 4.47
14 TestDownloadOnly/v1.24.3/kubectl 0
15 TestDownloadOnly/v1.24.3/LogsDuration 0.3
16 TestDownloadOnly/DeleteAll 0.75
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.43
19 TestBinaryMirror 5.99
20 TestOffline 51.99
22 TestAddons/Setup 138.53
26 TestAddons/parallel/MetricsServer 5.6
27 TestAddons/parallel/HelmTiller 12.31
29 TestAddons/parallel/CSI 40.84
30 TestAddons/parallel/Headlamp 10.26
32 TestAddons/serial/GCPAuth 15.29
33 TestAddons/StoppedEnableDisable 12.99
34 TestCertOptions 33.86
35 TestCertExpiration 249.08
36 TestDockerFlags 33.49
37 TestForceSystemdFlag 34.47
38 TestForceSystemdEnv 34.82
40 TestHyperKitDriverInstallOrUpdate 6.57
43 TestErrorSpam/setup 27.66
44 TestErrorSpam/start 2.41
45 TestErrorSpam/status 1.36
46 TestErrorSpam/pause 1.96
47 TestErrorSpam/unpause 2.03
48 TestErrorSpam/stop 13.26
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 92.6
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 40.46
55 TestFunctional/serial/KubeContext 0.03
56 TestFunctional/serial/KubectlGetPods 1.59
59 TestFunctional/serial/CacheCmd/cache/add_remote 5.38
60 TestFunctional/serial/CacheCmd/cache/add_local 1.87
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
62 TestFunctional/serial/CacheCmd/cache/list 0.08
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.46
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.68
65 TestFunctional/serial/CacheCmd/cache/delete 0.15
66 TestFunctional/serial/MinikubeKubectlCmd 0.5
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.65
68 TestFunctional/serial/ExtraConfig 52.16
69 TestFunctional/serial/ComponentHealth 0.05
70 TestFunctional/serial/LogsCmd 3.12
71 TestFunctional/serial/LogsFileCmd 3.1
73 TestFunctional/parallel/ConfigCmd 0.48
74 TestFunctional/parallel/DashboardCmd 29.11
75 TestFunctional/parallel/DryRun 1.55
76 TestFunctional/parallel/InternationalLanguage 0.72
77 TestFunctional/parallel/StatusCmd 1.37
80 TestFunctional/parallel/ServiceCmd 13.54
82 TestFunctional/parallel/AddonsCmd 0.28
83 TestFunctional/parallel/PersistentVolumeClaim 26.73
85 TestFunctional/parallel/SSHCmd 1.02
86 TestFunctional/parallel/CpCmd 1.77
87 TestFunctional/parallel/MySQL 22.42
88 TestFunctional/parallel/FileSync 0.51
89 TestFunctional/parallel/CertSync 2.91
93 TestFunctional/parallel/NodeLabels 0.05
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
97 TestFunctional/parallel/Version/short 0.12
98 TestFunctional/parallel/Version/components 0.7
99 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
100 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
101 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
102 TestFunctional/parallel/ImageCommands/ImageListYaml 0.39
103 TestFunctional/parallel/ImageCommands/ImageBuild 3.52
104 TestFunctional/parallel/ImageCommands/Setup 2.4
105 TestFunctional/parallel/DockerEnv/bash 1.79
106 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
107 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.42
108 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.33
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.78
110 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.52
111 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.15
112 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.99
113 TestFunctional/parallel/ImageCommands/ImageRemove 0.75
114 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.74
115 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.74
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.63
117 TestFunctional/parallel/ProfileCmd/profile_list 0.55
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.66
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.18
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/MountCmd/any-port 10.6
130 TestFunctional/parallel/MountCmd/specific-port 2.81
131 TestFunctional/delete_addon-resizer_images 0.17
132 TestFunctional/delete_my-image_image 0.07
133 TestFunctional/delete_minikube_cached_images 0.07
143 TestJSONOutput/start/Command 41.44
144 TestJSONOutput/start/Audit 0
146 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/pause/Command 0.68
150 TestJSONOutput/pause/Audit 0
152 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/unpause/Command 0.72
156 TestJSONOutput/unpause/Audit 0
158 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/stop/Command 12.43
162 TestJSONOutput/stop/Audit 0
164 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
166 TestErrorJSONOutput 0.77
168 TestKicCustomNetwork/create_custom_network 31.24
169 TestKicCustomNetwork/use_default_bridge_network 30.49
170 TestKicExistingNetwork 31.25
171 TestKicCustomSubnet 31.99
172 TestMainNoArgs 0.07
173 TestMinikubeProfile 66.17
176 TestMountStart/serial/StartWithMountFirst 7.88
177 TestMountStart/serial/VerifyMountFirst 0.44
178 TestMountStart/serial/StartWithMountSecond 7.72
179 TestMountStart/serial/VerifyMountSecond 0.45
180 TestMountStart/serial/DeleteFirst 2.27
181 TestMountStart/serial/VerifyMountPostDelete 0.49
182 TestMountStart/serial/Stop 1.63
183 TestMountStart/serial/RestartStopped 5.32
184 TestMountStart/serial/VerifyMountPostStop 0.44
187 TestMultiNode/serial/FreshStart2Nodes 107.06
188 TestMultiNode/serial/DeployApp2Nodes 6.57
189 TestMultiNode/serial/PingHostFrom2Pods 0.88
190 TestMultiNode/serial/AddNode 34.78
191 TestMultiNode/serial/ProfileList 0.6
192 TestMultiNode/serial/CopyFile 17.1
193 TestMultiNode/serial/StopNode 14.25
194 TestMultiNode/serial/StartAfterStop 19.95
195 TestMultiNode/serial/RestartKeepsNodes 136.89
196 TestMultiNode/serial/DeleteNode 18.86
197 TestMultiNode/serial/StopMultiNode 25.14
198 TestMultiNode/serial/RestartMultiNode 58.08
199 TestMultiNode/serial/ValidateNameConflict 32.06
205 TestScheduledStopUnix 103.75
206 TestSkaffold 63.6
208 TestInsufficientStorage 13.41
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 6.51
225 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 9.11
226 TestStoppedBinaryUpgrade/Setup 0.8
228 TestStoppedBinaryUpgrade/MinikubeLogs 3.56
230 TestPause/serial/Start 44.2
231 TestPause/serial/SecondStartNoReconfiguration 41.93
232 TestPause/serial/Pause 0.85
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.58
243 TestNoKubernetes/serial/StartWithK8s 30.35
244 TestNetworkPlugins/group/auto/Start 46.54
245 TestNoKubernetes/serial/StartWithStopK8s 17.72
246 TestNoKubernetes/serial/Start 6.86
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.44
248 TestNoKubernetes/serial/ProfileList 1.62
249 TestNoKubernetes/serial/Stop 1.68
250 TestNoKubernetes/serial/StartNoArgs 4.53
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.45
252 TestNetworkPlugins/group/kindnet/Start 51.21
253 TestNetworkPlugins/group/auto/KubeletFlags 0.51
254 TestNetworkPlugins/group/auto/NetCatPod 59.95
255 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
256 TestNetworkPlugins/group/kindnet/KubeletFlags 0.55
257 TestNetworkPlugins/group/kindnet/NetCatPod 13.96
258 TestNetworkPlugins/group/auto/DNS 0.13
259 TestNetworkPlugins/group/auto/Localhost 0.11
260 TestNetworkPlugins/group/auto/HairPin 5.12
261 TestNetworkPlugins/group/kindnet/DNS 0.15
262 TestNetworkPlugins/group/kindnet/Localhost 0.14
263 TestNetworkPlugins/group/kindnet/HairPin 0.12
264 TestNetworkPlugins/group/cilium/Start 84.37
265 TestNetworkPlugins/group/calico/Start 94.19
266 TestNetworkPlugins/group/cilium/ControllerPod 5.02
267 TestNetworkPlugins/group/cilium/KubeletFlags 0.52
268 TestNetworkPlugins/group/cilium/NetCatPod 14.36
269 TestNetworkPlugins/group/calico/ControllerPod 5.02
270 TestNetworkPlugins/group/calico/KubeletFlags 0.47
271 TestNetworkPlugins/group/calico/NetCatPod 12.82
272 TestNetworkPlugins/group/cilium/DNS 0.13
273 TestNetworkPlugins/group/cilium/Localhost 0.11
274 TestNetworkPlugins/group/cilium/HairPin 0.12
275 TestNetworkPlugins/group/false/Start 49.24
276 TestNetworkPlugins/group/calico/DNS 0.17
277 TestNetworkPlugins/group/calico/Localhost 0.14
278 TestNetworkPlugins/group/calico/HairPin 0.14
279 TestNetworkPlugins/group/bridge/Start 47.8
280 TestNetworkPlugins/group/false/KubeletFlags 0.47
281 TestNetworkPlugins/group/false/NetCatPod 14.25
282 TestNetworkPlugins/group/bridge/KubeletFlags 0.47
283 TestNetworkPlugins/group/bridge/NetCatPod 13.75
284 TestNetworkPlugins/group/false/DNS 0.13
285 TestNetworkPlugins/group/false/Localhost 0.11
286 TestNetworkPlugins/group/false/HairPin 5.12
287 TestNetworkPlugins/group/bridge/DNS 0.13
288 TestNetworkPlugins/group/bridge/Localhost 0.12
289 TestNetworkPlugins/group/bridge/HairPin 0.12
290 TestNetworkPlugins/group/enable-default-cni/Start 46.73
291 TestNetworkPlugins/group/kubenet/Start 45.72
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.47
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.79
294 TestNetworkPlugins/group/kubenet/KubeletFlags 0.97
295 TestNetworkPlugins/group/kubenet/NetCatPod 14.08
296 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
297 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
298 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
299 TestNetworkPlugins/group/kubenet/DNS 0.13
300 TestNetworkPlugins/group/kubenet/Localhost 0.12
305 TestStartStop/group/no-preload/serial/FirstStart 56.23
306 TestStartStop/group/no-preload/serial/DeployApp 10.77
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.79
308 TestStartStop/group/no-preload/serial/Stop 12.57
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.34
310 TestStartStop/group/no-preload/serial/SecondStart 301.81
313 TestStartStop/group/old-k8s-version/serial/Stop 1.68
314 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.05
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.6
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.51
321 TestStartStop/group/embed-certs/serial/FirstStart 50.17
322 TestStartStop/group/embed-certs/serial/DeployApp 10.71
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.74
324 TestStartStop/group/embed-certs/serial/Stop 12.69
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.36
326 TestStartStop/group/embed-certs/serial/SecondStart 302.47
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 7.02
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.59
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.48
333 TestStartStop/group/default-k8s-different-port/serial/FirstStart 46.25
334 TestStartStop/group/default-k8s-different-port/serial/DeployApp 10.73
335 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.71
336 TestStartStop/group/default-k8s-different-port/serial/Stop 12.63
337 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.34
338 TestStartStop/group/default-k8s-different-port/serial/SecondStart 306.25
339 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 7.02
340 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.81
341 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.5
344 TestStartStop/group/newest-cni/serial/FirstStart 44.27
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.8
348 TestStartStop/group/newest-cni/serial/Stop 12.59
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
350 TestStartStop/group/newest-cni/serial/SecondStart 18.65
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.57
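The durations above are in seconds, as reported by Go's test runner. Any single case can typically be re-run in isolation with go test's -run filter; a minimal sketch, assuming a minikube source checkout with the integration suite under test/integration and a prebuilt out/minikube-darwin-amd64 binary:

    # Hypothetical reproduction of one case from the table above; the
    # -timeout value is an illustrative guess, not taken from this report.
    go test ./test/integration -v -run 'TestMultiNode/serial/StopNode' -timeout 30m

Subtests are addressed with the slash-separated names exactly as they appear in the table.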
x
+
TestDownloadOnly/v1.16.0/json-events (71.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220725155224-14919 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220725155224-14919 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (1m11.918133235s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (71.92s)
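The -o=json flag exercised here makes minikube emit progress as line-delimited JSON events instead of styled text, which is what a "json-events" case can assert against. A hedged sketch of inspecting that stream by hand (jq is this example's assumption, not something the harness uses):

    out/minikube-darwin-amd64 start -o=json --download-only -p demo --driver=docker | jq -r '.type'
    # prints one CloudEvents-style "type" field per emitted JSON line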

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220725155224-14919
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220725155224-14919: exit status 85 (317.146299ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220725155224-14919 | jenkins | v1.26.0 | 25 Jul 22 15:52 PDT |          |
	|         | download-only-20220725155224-14919 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 15:52:24
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 15:52:24.971359   14921 out.go:296] Setting OutFile to fd 1 ...
	I0725 15:52:24.971523   14921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 15:52:24.971529   14921 out.go:309] Setting ErrFile to fd 2...
	I0725 15:52:24.971532   14921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 15:52:24.971632   14921 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	W0725 15:52:24.971726   14921 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/config/config.json: no such file or directory
	I0725 15:52:24.972456   14921 out.go:303] Setting JSON to true
	I0725 15:52:24.987179   14921 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6468,"bootTime":1658783076,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 15:52:24.987272   14921 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 15:52:25.013952   14921 out.go:97] [download-only-20220725155224-14919] minikube v1.26.0 on Darwin 12.5
	I0725 15:52:25.014255   14921 notify.go:193] Checking for updates...
	W0725 15:52:25.014310   14921 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball: no such file or directory
	I0725 15:52:25.035858   14921 out.go:169] MINIKUBE_LOCATION=14555
	I0725 15:52:25.057623   14921 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 15:52:25.080894   14921 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 15:52:25.103028   14921 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 15:52:25.124750   14921 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	W0725 15:52:25.168598   14921 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 15:52:25.169020   14921 driver.go:365] Setting default libvirt URI to qemu:///system
	W0725 15:53:24.450214   14921 docker.go:113] docker version returned error: deadline exceeded running "docker version --format {{.Server.Os}}-{{.Server.Version}}": signal: killed
	I0725 15:53:24.471940   14921 out.go:97] Using the docker driver based on user configuration
	I0725 15:53:24.471962   14921 start.go:284] selected driver: docker
	I0725 15:53:24.471984   14921 start.go:808] validating driver "docker" against <nil>
	I0725 15:53:24.472071   14921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 15:53:24.613674   14921 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 15:53:24.635681   14921 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0725 15:53:24.656762   14921 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0725 15:53:24.698518   14921 out.go:169] 
	W0725 15:53:24.719837   14921 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0725 15:53:24.740637   14921 out.go:169] 
	I0725 15:53:24.782649   14921 out.go:169] 
	W0725 15:53:24.803660   14921 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0725 15:53:24.803782   14921 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0725 15:53:24.803821   14921 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0725 15:53:24.824491   14921 out.go:169] 
	I0725 15:53:24.845756   14921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 15:53:24.994771   14921 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0725 15:53:25.016163   14921 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0725 15:53:25.016230   14921 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0725 15:53:25.060436   14921 out.go:169] 
	W0725 15:53:25.081244   14921 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0725 15:53:25.081351   14921 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0725 15:53:25.081384   14921 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0725 15:53:25.102267   14921 out.go:169] 
	I0725 15:53:25.144140   14921 out.go:169] 
	W0725 15:53:25.165421   14921 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0725 15:53:25.186285   14921 out.go:169] 
	I0725 15:53:25.207106   14921 start_flags.go:377] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0725 15:53:25.207228   14921 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 15:53:25.228236   14921 out.go:169] Using Docker Desktop driver with root privileges
	I0725 15:53:25.249288   14921 cni.go:95] Creating CNI manager for ""
	I0725 15:53:25.249311   14921 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 15:53:25.249319   14921 start_flags.go:310] config:
	{Name:download-only-20220725155224-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220725155224-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 15:53:25.271124   14921 out.go:97] Starting control plane node download-only-20220725155224-14919 in cluster download-only-20220725155224-14919
	I0725 15:53:25.271158   14921 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 15:53:25.292274   14921 out.go:97] Pulling base image ...
	I0725 15:53:25.292337   14921 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 15:53:25.292393   14921 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 15:53:25.292538   14921 cache.go:107] acquiring lock: {Name:mk8fda3a81b59021c9135a18493bfc756ee2f248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 15:53:25.292587   14921 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/download-only-20220725155224-14919/config.json ...
	I0725 15:53:25.292619   14921 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/download-only-20220725155224-14919/config.json: {Name:mk3e0c979080a53999e7e406f8162e712944dc2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 15:53:25.292595   14921 cache.go:107] acquiring lock: {Name:mk08e328166f96dd4c805d32f95290c12dc4eba1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 15:53:25.293543   14921 cache.go:107] acquiring lock: {Name:mk4173c3bf80cd74f0095909729c8bd43a9bb88b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 15:53:25.293633   14921 cache.go:107] acquiring lock: {Name:mkefad7f65ecc5a4b9d4b4da022b244f7b287648 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 15:53:25.293603   14921 cache.go:107] acquiring lock: {Name:mk5c0ec1954405d846c6f8ffdfbb976f821ed0ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 15:53:25.293635   14921 cache.go:107] acquiring lock: {Name:mk0fc225a85517e0cc247719f1294cbdac7f52e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 15:53:25.293753   14921 cache.go:107] acquiring lock: {Name:mka7597c5e722492430c43d644fc2fd7a5f39bbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 15:53:25.293793   14921 cache.go:107] acquiring lock: {Name:mkaaee44a65b6f8183058b64b1c985be89c02bd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 15:53:25.294473   14921 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 15:53:25.294730   14921 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0725 15:53:25.294731   14921 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0725 15:53:25.294733   14921 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0725 15:53:25.294767   14921 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0725 15:53:25.294767   14921 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0725 15:53:25.294779   14921 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0725 15:53:25.294789   14921 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0725 15:53:25.294882   14921 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 15:53:25.295112   14921 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0725 15:53:25.295115   14921 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0725 15:53:25.295120   14921 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0725 15:53:25.299613   14921 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 15:53:25.301703   14921 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 15:53:25.301771   14921 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 15:53:25.302127   14921 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 15:53:25.302250   14921 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 15:53:25.302448   14921 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 15:53:25.302777   14921 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 15:53:25.302971   14921 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 15:53:25.356209   14921 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0725 15:53:25.356417   14921 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local cache directory
	I0725 15:53:25.356534   14921 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0725 15:53:26.174568   14921 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0725 15:53:26.174983   14921 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0725 15:53:26.182775   14921 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0725 15:53:26.208121   14921 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0725 15:53:26.244314   14921 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 15:53:26.262572   14921 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0725 15:53:26.326350   14921 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0725 15:53:26.326372   14921 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 1.0327366s
	I0725 15:53:26.326384   14921 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0725 15:53:26.356825   14921 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0725 15:53:26.364877   14921 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0725 15:53:26.516912   14921 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0725 15:53:26.516928   14921 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.224408396s
	I0725 15:53:26.516938   14921 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0725 15:53:26.633892   14921 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0725 15:53:26.853261   14921 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0725 15:53:26.853285   14921 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 1.559795234s
	I0725 15:53:26.853301   14921 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0725 15:53:27.307512   14921 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0725 15:53:27.307533   14921 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 2.014240569s
	I0725 15:53:27.307541   14921 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0725 15:53:27.344475   14921 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0725 15:53:27.344491   14921 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 2.051102888s
	I0725 15:53:27.344501   14921 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0725 15:53:27.406246   14921 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0725 15:53:27.406262   14921 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 2.113694129s
	I0725 15:53:27.406272   14921 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0725 15:53:27.498630   14921 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0725 15:53:27.498648   14921 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 2.206069437s
	I0725 15:53:27.498664   14921 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0725 15:53:27.774699   14921 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0725 15:53:27.774728   14921 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 2.481068252s
	I0725 15:53:27.774737   14921 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0725 15:53:27.774750   14921 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220725155224-14919"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.32s)
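The "Last Start" log above fetches each binary through a URL of the form <url>?checksum=file:<url>.sha1, i.e. the downloader is asked to verify the payload against a published SHA-1 file. The same verification can be sketched by hand with the URLs quoted in the log and stock macOS tools:

    curl -sLO https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl
    curl -sL https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1
    shasum -a 1 kubectl   # digest should match the .sha1 file fetched above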

                                                
                                    
x
+
TestDownloadOnly/v1.24.3/json-events (4.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.24.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220725155224-14919 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220725155224-14919 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=docker --driver=docker : (4.471304589s)
--- PASS: TestDownloadOnly/v1.24.3/json-events (4.47s)

                                                
                                    
x
+
TestDownloadOnly/v1.24.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.24.3/kubectl
--- PASS: TestDownloadOnly/v1.24.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.24.3/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.24.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220725155224-14919
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220725155224-14919: exit status 85 (293.898851ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220725155224-14919 | jenkins | v1.26.0 | 25 Jul 22 15:52 PDT |          |
	|         | download-only-20220725155224-14919 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	| start   | -o=json --download-only -p         | download-only-20220725155224-14919 | jenkins | v1.26.0 | 25 Jul 22 15:53 PDT |          |
	|         | download-only-20220725155224-14919 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.24.3       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 15:53:37
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 15:53:37.453902   16446 out.go:296] Setting OutFile to fd 1 ...
	I0725 15:53:37.454593   16446 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 15:53:37.454601   16446 out.go:309] Setting ErrFile to fd 2...
	I0725 15:53:37.454622   16446 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 15:53:37.454887   16446 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	W0725 15:53:37.455303   16446 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/config/config.json: no such file or directory
	I0725 15:53:37.455718   16446 out.go:303] Setting JSON to true
	I0725 15:53:37.471746   16446 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6541,"bootTime":1658783076,"procs":361,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 15:53:37.471880   16446 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 15:53:37.494098   16446 out.go:97] [download-only-20220725155224-14919] minikube v1.26.0 on Darwin 12.5
	I0725 15:53:37.494227   16446 notify.go:193] Checking for updates...
	W0725 15:53:37.494224   16446 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/cache/preloaded-tarball: no such file or directory
	I0725 15:53:37.515656   16446 out.go:169] MINIKUBE_LOCATION=14555
	I0725 15:53:37.536942   16446 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 15:53:37.557878   16446 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 15:53:37.618037   16446 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 15:53:37.681104   16446 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	W0725 15:53:37.764739   16446 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 15:53:37.765238   16446 config.go:178] Loaded profile config "download-only-20220725155224-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0725 15:53:37.765287   16446 start.go:716] api.Load failed for download-only-20220725155224-14919: filestore "download-only-20220725155224-14919": Docker machine "download-only-20220725155224-14919" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0725 15:53:37.765334   16446 driver.go:365] Setting default libvirt URI to qemu:///system
	W0725 15:53:37.765367   16446 start.go:716] api.Load failed for download-only-20220725155224-14919: filestore "download-only-20220725155224-14919": Docker machine "download-only-20220725155224-14919" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0725 15:53:37.830536   16446 docker.go:113] docker version returned error: exit status 1
	I0725 15:53:37.851721   16446 out.go:97] Using the docker driver based on existing profile
	I0725 15:53:37.851740   16446 start.go:284] selected driver: docker
	I0725 15:53:37.851746   16446 start.go:808] validating driver "docker" against &{Name:download-only-20220725155224-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220725155224-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 15:53:37.851908   16446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 15:53:37.986954   16446 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 15:53:38.008581   16446 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0725 15:53:38.029576   16446 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0725 15:53:38.092455   16446 out.go:169] 
	W0725 15:53:38.113559   16446 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0725 15:53:38.134760   16446 out.go:169] 
	I0725 15:53:38.177381   16446 out.go:169] 
	W0725 15:53:38.198575   16446 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0725 15:53:38.198674   16446 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0725 15:53:38.198711   16446 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0725 15:53:38.219420   16446 out.go:169] 
	I0725 15:53:38.240466   16446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 15:53:38.375625   16446 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0725 15:53:38.413055   16446 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0725 15:53:38.450997   16446 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0725 15:53:38.511013   16446 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0725 15:53:38.613031   16446 out.go:169] 
	W0725 15:53:38.651029   16446 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0725 15:53:38.687868   16446 out.go:169] 
	I0725 15:53:38.800889   16446 out.go:169] 
	W0725 15:53:38.822854   16446 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0725 15:53:38.822946   16446 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0725 15:53:38.823001   16446 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0725 15:53:38.843842   16446 out.go:169] 
	I0725 15:53:38.906017   16446 out.go:169] 
	W0725 15:53:38.927061   16446 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0725 15:53:38.927186   16446 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0725 15:53:38.927219   16446 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0725 15:53:38.965007   16446 out.go:169] 
	I0725 15:53:39.066611   16446 out.go:169] 
	W0725 15:53:39.103738   16446 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0725 15:53:39.124774   16446 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220725155224-14919"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.3/LogsDuration (0.30s)
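Both LogsDuration runs record the same underlying condition: the Docker daemon socket was unreachable, so docker system info reported NCPU:0 and MemTotal:0 and minikube's preflight checks (at least 2 CPUs and 1800MiB of memory for Kubernetes) could not pass. The values minikube validates can be spot-checked with the same command the log shows, narrowed to the two relevant fields:

    docker system info --format '{{.NCPU}} CPUs, {{.MemTotal}} bytes of memory'
    # a healthy Docker Desktop should report at least 2 CPUs and roughly 1.9 GB (1800MiB) or more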

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.75s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.75s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220725155224-14919
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

                                                
                                    
x
+
TestBinaryMirror (5.99s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220725155345-14919 --alsologtostderr --binary-mirror http://127.0.0.1:55199 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220725155345-14919 --alsologtostderr --binary-mirror http://127.0.0.1:55199 --driver=docker : (5.313821004s)
helpers_test.go:175: Cleaning up "binary-mirror-20220725155345-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220725155345-14919
--- PASS: TestBinaryMirror (5.99s)
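TestBinaryMirror points --binary-mirror at a local HTTP server on 127.0.0.1:55199, so the kubectl/kubelet/kubeadm downloads are served from that mirror rather than storage.googleapis.com. A rough sketch of the idea, with python3's http.server standing in for whatever the harness actually serves with (the port is copied from the log):

    python3 -m http.server 55199 --bind 127.0.0.1 &   # serve the current directory as the mirror
    out/minikube-darwin-amd64 start --download-only -p demo --binary-mirror http://127.0.0.1:55199 --driver=docker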

                                                
                                    
x
+
TestOffline (51.99s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220725163045-14919 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220725163045-14919 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (48.915495185s)
helpers_test.go:175: Cleaning up "offline-docker-20220725163045-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220725163045-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220725163045-14919: (3.071986599s)
--- PASS: TestOffline (51.99s)

                                                
                                    
x
+
TestAddons/Setup (138.53s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220725155351-14919 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220725155351-14919 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m18.529361735s)
--- PASS: TestAddons/Setup (138.53s)

TestAddons/parallel/MetricsServer (5.6s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 2.538767ms
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-8595bd7d4c-vdqgl" [12f1e093-2931-4851-a97a-80cae629158f] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009582609s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220725155351-14919 top pods -n kube-system
addons_test.go:384: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725155351-14919 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.60s)
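
The metrics-server check is two steps: wait for the k8s-app=metrics-server pod, then confirm the metrics API answers. The same probes can be run by hand against the profile (context name copied from the log above):

    kubectl --context addons-20220725155351-14919 get pods -n kube-system -l k8s-app=metrics-server
    kubectl --context addons-20220725155351-14919 top pods -n kube-system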

TestAddons/parallel/HelmTiller (12.31s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 2.70901ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-c7d76457b-hjq8j" [623edef7-e0c1-4fd9-988a-6255f1404cf8] Running
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009741078s
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220725155351-14919 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Done: kubectl --context addons-20220725155351-14919 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.773396743s)
addons_test.go:442: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725155351-14919 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.31s)

TestAddons/parallel/CSI (40.84s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 4.482147ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220725155351-14919 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:516: (dbg) Done: kubectl --context addons-20220725155351-14919 create -f testdata/csi-hostpath-driver/pvc.yaml: (3.0686592s)
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220725155351-14919 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220725155351-14919 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [1ca6a5a1-7d71-4526-8c57-a239c8a48174] Pending
helpers_test.go:342: "task-pv-pod" [1ca6a5a1-7d71-4526-8c57-a239c8a48174] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [1ca6a5a1-7d71-4526-8c57-a239c8a48174] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.011041551s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220725155351-14919 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220725155351-14919 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220725155351-14919 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220725155351-14919 delete pod task-pv-pod
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220725155351-14919 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220725155351-14919 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220725155351-14919 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220725155351-14919 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220725155351-14919 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [e9b257d4-ef19-4f93-b41d-510aa2d73823] Pending
helpers_test.go:342: "task-pv-pod-restore" [e9b257d4-ef19-4f93-b41d-510aa2d73823] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [e9b257d4-ef19-4f93-b41d-510aa2d73823] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.008054052s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220725155351-14919 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220725155351-14919 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220725155351-14919 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725155351-14919 addons disable csi-hostpath-driver --alsologtostderr -v=1

=== CONT  TestAddons/parallel/CSI
addons_test.go:590: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220725155351-14919 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.878481163s)
addons_test.go:594: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725155351-14919 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.84s)
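
The CSI pass is a full round trip: claim, pod, snapshot, restored claim, restored pod, teardown. The manifests come from testdata/csi-hostpath-driver and are not reproduced in this log, but the polling the helper does is visible above and easy to run by hand while the test is in flight (object names taken from the log):

    kubectl get pvc hpvc -o 'jsonpath={.status.phase}'
    kubectl get volumesnapshot new-snapshot-demo -o 'jsonpath={.status.readyToUse}'
    kubectl get pods -l app=task-pv-pod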

TestAddons/parallel/Headlamp (10.26s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-20220725155351-14919 --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-20220725155351-14919 --alsologtostderr -v=1: (1.249074288s)
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-866f5bd7bc-h8zlg" [8700241f-8c52-4187-9bf7-522931455550] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-866f5bd7bc-h8zlg" [8700241f-8c52-4187-9bf7-522931455550] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.010893288s
--- PASS: TestAddons/parallel/Headlamp (10.26s)

TestAddons/serial/GCPAuth (15.29s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220725155351-14919 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220725155351-14919 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [aac85d69-a476-4126-a0b6-1f5f981b99c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [aac85d69-a476-4126-a0b6-1f5f981b99c2] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.007760926s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220725155351-14919 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220725155351-14919 describe sa gcp-auth-test
addons_test.go:650: (dbg) Run:  kubectl --context addons-20220725155351-14919 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220725155351-14919 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725155351-14919 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220725155351-14919 addons disable gcp-auth --alsologtostderr -v=1: (6.686354708s)
--- PASS: TestAddons/serial/GCPAuth (15.29s)

TestAddons/StoppedEnableDisable (12.99s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220725155351-14919
addons_test.go:134: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220725155351-14919: (12.595170789s)
addons_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220725155351-14919
addons_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220725155351-14919
--- PASS: TestAddons/StoppedEnableDisable (12.99s)

TestCertOptions (33.86s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220725163217-14919 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220725163217-14919 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (30.102739296s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220725163217-14919 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220725163217-14919 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220725163217-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220725163217-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220725163217-14919: (2.751948025s)
--- PASS: TestCertOptions (33.86s)
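
TestCertOptions checks that the extra --apiserver-ips/--apiserver-names values and the non-default --apiserver-port=8555 actually land in the serving certificate and the admin kubeconfig. The two ssh probes above can be re-run by hand against any profile started that way (profile name below is illustrative):

    minikube -p cert-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    minikube ssh -p cert-demo -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555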

TestCertExpiration (249.08s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220725163211-14919 --memory=2048 --cert-expiration=3m --driver=docker 

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220725163211-14919 --memory=2048 --cert-expiration=3m --driver=docker : (31.686194299s)

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220725163211-14919 --memory=2048 --cert-expiration=8760h --driver=docker 
E0725 16:35:59.890716   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:36:10.630483   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220725163211-14919 --memory=2048 --cert-expiration=8760h --driver=docker : (34.613104127s)
helpers_test.go:175: Cleaning up "cert-expiration-20220725163211-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220725163211-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220725163211-14919: (2.784431235s)
--- PASS: TestCertExpiration (249.08s)
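
Most of the 249s here is deliberate waiting: the first start issues certificates that expire in three minutes, the test lets them lapse, and the second start with --cert-expiration=8760h has to recover from the expired certs and reissue. A sketch of the same sequence (profile name illustrative):

    minikube start -p cert-exp-demo --memory=2048 --cert-expiration=3m --driver=docker
    sleep 180    # let the 3m certificates expire
    minikube start -p cert-exp-demo --memory=2048 --cert-expiration=8760h --driver=docker
    minikube delete -p cert-exp-demo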

TestDockerFlags (33.49s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220725163143-14919 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0725 16:31:55.921077   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220725163143-14919 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (29.237532051s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220725163143-14919 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220725163143-14919 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220725163143-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220725163143-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220725163143-14919: (3.261700403s)
--- PASS: TestDockerFlags (33.49s)
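
The flags test feeds --docker-env and --docker-opt through to dockerd inside the node and reads them back out of the systemd unit. The verification is two plain ssh probes (profile name illustrative): Environment should list FOO=BAR and BAZ=BAT, and the ExecStart line should carry the debug and icc options that were passed in:

    minikube -p docker-flags-demo ssh "sudo systemctl show docker --property=Environment --no-pager"
    minikube -p docker-flags-demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"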

TestForceSystemdFlag (34.47s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220725163137-14919 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220725163137-14919 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (30.957943634s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220725163137-14919 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220725163137-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220725163137-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220725163137-14919: (2.960804645s)
--- PASS: TestForceSystemdFlag (34.47s)
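
--force-systemd switches the node's Docker daemon from the default cgroupfs cgroup driver to systemd, and the whole assertion is the single docker info probe shown above (profile name below is illustrative):

    minikube -p systemd-demo ssh "docker info --format {{.CgroupDriver}}"    # expect: systemd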

TestForceSystemdEnv (34.82s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220725163108-14919 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0725 16:31:10.628494   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220725163108-14919 --memory=2048 --alsologtostderr -v=5 --driver=docker : (30.581786581s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220725163108-14919 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220725163108-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220725163108-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220725163108-14919: (3.668997289s)
--- PASS: TestForceSystemdEnv (34.82s)

TestHyperKitDriverInstallOrUpdate (6.57s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.57s)

TestErrorSpam/setup (27.66s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220725155733-14919 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220725155733-14919 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 --driver=docker : (27.662471966s)
--- PASS: TestErrorSpam/setup (27.66s)

TestErrorSpam/start (2.41s)
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 start --dry-run
--- PASS: TestErrorSpam/start (2.41s)

TestErrorSpam/status (1.36s)
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 status
--- PASS: TestErrorSpam/status (1.36s)

TestErrorSpam/pause (1.96s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 pause
--- PASS: TestErrorSpam/pause (1.96s)

TestErrorSpam/unpause (2.03s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 unpause
--- PASS: TestErrorSpam/unpause (2.03s)

TestErrorSpam/stop (13.26s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 stop: (12.574948318s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725155733-14919 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220725155733-14919 stop
--- PASS: TestErrorSpam/stop (13.26s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/files/etc/test/nested/copy/14919/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (92.6s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725155824-14919 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2160: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220725155824-14919 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m32.594596753s)
--- PASS: TestFunctional/serial/StartWithProxy (92.60s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.46s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725155824-14919 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220725155824-14919 --alsologtostderr -v=8: (40.464048146s)
functional_test.go:655: soft start took 40.464648738s for "functional-20220725155824-14919" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.46s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (1.59s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220725155824-14919 get po -A
functional_test.go:688: (dbg) Done: kubectl --context functional-20220725155824-14919 get po -A: (1.589552333s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.59s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 cache add k8s.gcr.io/pause:3.1: (1.306626754s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 cache add k8s.gcr.io/pause:3.3: (2.133501575s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 cache add k8s.gcr.io/pause:latest: (1.938338923s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.38s)

TestFunctional/serial/CacheCmd/cache/add_local (1.87s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220725155824-14919 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3591438979/001
functional_test.go:1081: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 cache add minikube-local-cache-test:functional-20220725155824-14919
functional_test.go:1081: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 cache add minikube-local-cache-test:functional-20220725155824-14919: (1.341729305s)
functional_test.go:1086: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 cache delete minikube-local-cache-test:functional-20220725155824-14919
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220725155824-14919
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.87s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.68s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (438.927174ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 cache reload: (1.288530686s)
functional_test.go:1155: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.68s)
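
The FATA/exit-1 block above is the expected midpoint of this test, not a failure: the image is removed inside the node, crictl confirms it is gone, and cache reload pushes it back from the host-side cache. By hand:

    minikube ssh -- sudo docker rmi k8s.gcr.io/pause:latest
    minikube ssh -- sudo crictl inspecti k8s.gcr.io/pause:latest    # exits 1 while the image is gone
    minikube cache reload
    minikube ssh -- sudo crictl inspecti k8s.gcr.io/pause:latest    # succeeds again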

TestFunctional/serial/CacheCmd/cache/delete (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.5s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 kubectl -- --context functional-20220725155824-14919 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220725155824-14919 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

TestFunctional/serial/ExtraConfig (52.16s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725155824-14919 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0725 16:01:10.503990   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:10.511956   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:10.524218   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:10.545121   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:10.587283   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:10.669589   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:10.830157   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:11.150341   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:11.790950   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:13.073375   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:15.634159   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:20.756361   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:01:30.996942   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
functional_test.go:749: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220725155824-14919 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.16113927s)
functional_test.go:753: restart took 52.16124303s for "functional-20220725155824-14919" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (52.16s)

TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220725155824-14919 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (3.12s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 logs
functional_test.go:1228: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 logs: (3.118981618s)
--- PASS: TestFunctional/serial/LogsCmd (3.12s)

TestFunctional/serial/LogsFileCmd (3.1s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2702719853/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2702719853/001/logs.txt: (3.098366793s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.10s)

TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725155824-14919 config get cpus: exit status 14 (56.078739ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725155824-14919 config get cpus: exit status 14 (54.714286ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
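
Both exit-status-14 entries above are intentional: config get returns 14 rather than 0 when a key is unset, once before the set and once after the unset. The round trip the test drives:

    minikube config set cpus 2
    minikube config get cpus      # prints 2
    minikube config unset cpus
    minikube config get cpus      # exit status 14: key not found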

TestFunctional/parallel/DashboardCmd (29.11s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220725155824-14919 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220725155824-14919 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 18877: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.11s)

TestFunctional/parallel/DryRun (1.55s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725155824-14919 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220725155824-14919 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (709.20633ms)

-- stdout --
	* [functional-20220725155824-14919] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0725 16:02:53.629135   18774 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:02:53.629777   18774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:02:53.629786   18774 out.go:309] Setting ErrFile to fd 2...
	I0725 16:02:53.629793   18774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:02:53.630046   18774 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:02:53.630784   18774 out.go:303] Setting JSON to false
	I0725 16:02:53.647906   18774 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7097,"bootTime":1658783076,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:02:53.648038   18774 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:02:53.671261   18774 out.go:177] * [functional-20220725155824-14919] minikube v1.26.0 on Darwin 12.5
	I0725 16:02:53.714051   18774 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:02:53.736296   18774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:02:53.794974   18774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:02:53.854054   18774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:02:53.876124   18774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:02:53.898998   18774 config.go:178] Loaded profile config "functional-20220725155824-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:02:53.899651   18774 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:02:53.969414   18774 docker.go:137] docker version: linux-20.10.17
	I0725 16:02:53.969560   18774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:02:54.111866   18774 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 23:02:54.039873966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:02:54.154950   18774 out.go:177] * Using the docker driver based on existing profile
	I0725 16:02:54.176158   18774 start.go:284] selected driver: docker
	I0725 16:02:54.176296   18774 start.go:808] validating driver "docker" against &{Name:functional-20220725155824-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220725155824-14919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:02:54.176546   18774 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:02:54.202220   18774 out.go:177] 
	W0725 16:02:54.223099   18774 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0725 16:02:54.244055   18774 out.go:177] 
** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725155824-14919 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.55s)
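The RSRC_INSUFFICIENT_REQ_MEMORY stderr above is the failing half of the dry-run check; the second Run line then confirms that a dry run with default memory succeeds. A minimal sketch of reproducing the failing half by hand, using the same profile and flags as the log:

  $ out/minikube-darwin-amd64 start -p functional-20220725155824-14919 \
      --dry-run --memory 250MB --alsologtostderr --driver=docker
  $ echo $?    # non-zero; the InternationalLanguage run below records exit status 23 for this case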
TestFunctional/parallel/InternationalLanguage (0.72s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725155824-14919 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220725155824-14919 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (721.311191ms)
-- stdout --
	* [functional-20220725155824-14919] minikube v1.26.0 sur Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0725 16:02:44.746425   18606 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:02:44.746582   18606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:02:44.746587   18606 out.go:309] Setting ErrFile to fd 2...
	I0725 16:02:44.746590   18606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:02:44.746705   18606 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:02:44.747134   18606 out.go:303] Setting JSON to false
	I0725 16:02:44.762520   18606 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7088,"bootTime":1658783076,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0725 16:02:44.762645   18606 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 16:02:44.784796   18606 out.go:177] * [functional-20220725155824-14919] minikube v1.26.0 sur Darwin 12.5
	I0725 16:02:44.854544   18606 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 16:02:44.875604   18606 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	I0725 16:02:44.897428   18606 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 16:02:44.939400   18606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 16:02:44.997520   18606 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	I0725 16:02:45.024363   18606 config.go:178] Loaded profile config "functional-20220725155824-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:02:45.025051   18606 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 16:02:45.102302   18606 docker.go:137] docker version: linux-20.10.17
	I0725 16:02:45.102418   18606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 16:02:45.246300   18606 info.go:265] docker info: {ID:ZBPM:TZ5A:N35E:7TT4:6J7K:XQ3N:UILG:GFPO:R7RK:6D5U:VYM6:SSGI Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-25 23:02:45.17033251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 16:02:45.289312   18606 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0725 16:02:45.310277   18606 start.go:284] selected driver: docker
	I0725 16:02:45.310301   18606 start.go:808] validating driver "docker" against &{Name:functional-20220725155824-14919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220725155824-14919 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-pol
icy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 16:02:45.310382   18606 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 16:02:45.334365   18606 out.go:177] 
	W0725 16:02:45.355787   18606 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0725 16:02:45.377474   18606 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.72s)
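The French output here is the point of the test: it is the localized form of the RSRC_INSUFFICIENT_REQ_MEMORY message above (in English: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB"). A sketch of reproducing it by hand, assuming minikube picks its language from the standard locale variables (the exact variable the test harness sets is not visible in this log):

  $ LC_ALL=fr out/minikube-darwin-amd64 start -p functional-20220725155824-14919 \
      --dry-run --memory 250MB --alsologtostderr --driver=docker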
TestFunctional/parallel/StatusCmd (1.37s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 status
functional_test.go:852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:864: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.37s)
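Note that "kublet:" in the -f template at functional_test.go:852 is only a literal label in the format string (a typo in the test source, not a field reference), so it has no effect on the {{.Kubelet}} lookup. Run by hand, the same template prints something along these lines on a healthy cluster:

  $ out/minikube-darwin-amd64 -p functional-20220725155824-14919 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured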
TestFunctional/parallel/ServiceCmd (13.54s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220725155824-14919 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220725155824-14919 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-jp265" [923f2280-2b8d-451b-a07b-954287c3da2e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-jp265" [923f2280-2b8d-451b-a07b-954287c3da2e] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 6.008495422s
functional_test.go:1448: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 service list: (1.343207388s)
functional_test.go:1462: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 service --namespace=default --https --url hello-node
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 service --namespace=default --https --url hello-node: (2.030102501s)
functional_test.go:1475: found endpoint: https://127.0.0.1:57053
functional_test.go:1490: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 service hello-node --url --format={{.IP}}
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1490: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 service hello-node --url --format={{.IP}}: (2.029896711s)
functional_test.go:1504: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 service hello-node --url
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 service hello-node --url: (2.030320347s)
functional_test.go:1510: found endpoint for hello-node: http://127.0.0.1:57119
--- PASS: TestFunctional/parallel/ServiceCmd (13.54s)
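The endpoints discovered above can be exercised directly; a small sketch, assuming curl is available on the host (echoserver:1.8 answers any request with a dump of the request it received):

  $ URL=$(out/minikube-darwin-amd64 -p functional-20220725155824-14919 service hello-node --url)
  $ curl -s "$URL"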
TestFunctional/parallel/AddonsCmd (0.28s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)
TestFunctional/parallel/PersistentVolumeClaim (26.73s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [3953189a-6193-48ba-ba24-ef4ef7af14c9] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010170135s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220725155824-14919 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220725155824-14919 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220725155824-14919 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220725155824-14919 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [bf2bae0f-4e24-4102-b03d-cd7b7a6e5d97] Pending
helpers_test.go:342: "sp-pod" [bf2bae0f-4e24-4102-b03d-cd7b7a6e5d97] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [bf2bae0f-4e24-4102-b03d-cd7b7a6e5d97] Running
E0725 16:02:32.438963   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.007523107s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220725155824-14919 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220725155824-14919 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220725155824-14919 delete -f testdata/storage-provisioner/pod.yaml: (1.006076619s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220725155824-14919 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [a7c8aa47-1728-4c0e-af25-b5b00594baab] Pending
helpers_test.go:342: "sp-pod" [a7c8aa47-1728-4c0e-af25-b5b00594baab] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [a7c8aa47-1728-4c0e-af25-b5b00594baab] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009285583s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220725155824-14919 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.73s)
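The flow above is: create PVC "myclaim", run a pod that mounts it, write /tmp/mount/foo, delete and recreate the pod, and verify the file survived. The log does not include testdata/storage-provisioner/pvc.yaml itself; a minimal claim with the same name would look like this (every field other than the name is an assumption):

$ kubectl --context functional-20220725155824-14919 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF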
TestFunctional/parallel/SSHCmd (1.02s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.02s)
TestFunctional/parallel/CpCmd (1.77s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh -n functional-20220725155824-14919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 cp functional-20220725155824-14919:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd4156635834/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh -n functional-20220725155824-14919 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.77s)
TestFunctional/parallel/MySQL (22.42s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220725155824-14919 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-t9666" [275272c2-3f37-4c7e-bd9c-1bc933e87864] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-t9666" [275272c2-3f37-4c7e-bd9c-1bc933e87864] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.015698687s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220725155824-14919 exec mysql-67f7d69d8b-t9666 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220725155824-14919 exec mysql-67f7d69d8b-t9666 -- mysql -ppassword -e "show databases;": exit status 1 (122.446912ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220725155824-14919 exec mysql-67f7d69d8b-t9666 -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220725155824-14919 exec mysql-67f7d69d8b-t9666 -- mysql -ppassword -e "show databases;": exit status 1 (129.612876ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220725155824-14919 exec mysql-67f7d69d8b-t9666 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.42s)
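The two ERROR 2002 retries are benign: the pod reports Running before mysqld has finished creating its socket, so the test simply retries until the query succeeds. A sketch of waiting for readiness explicitly instead, assuming the Deployment is named mysql (as the pod name mysql-67f7d69d8b-t9666 suggests) and that mysqladmin ships in the mysql:5.7 image:

  $ until kubectl --context functional-20220725155824-14919 exec deploy/mysql -- \
        mysqladmin ping -ppassword >/dev/null 2>&1; do sleep 2; done
  $ kubectl --context functional-20220725155824-14919 exec deploy/mysql -- \
        mysql -ppassword -e "show databases;"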
TestFunctional/parallel/FileSync (0.51s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/14919/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo cat /etc/test/nested/copy/14919/hosts"
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.51s)
TestFunctional/parallel/CertSync (2.91s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/14919.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo cat /etc/ssl/certs/14919.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/14919.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo cat /usr/share/ca-certificates/14919.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/149192.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo cat /etc/ssl/certs/149192.pem"
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/149192.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo cat /usr/share/ca-certificates/149192.pem"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.91s)
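The numeric names checked alongside each .pem appear to be the usual OpenSSL subject-hash links (51391683.0 paired with 14919.pem, 3ec20f2e.0 with 149192.pem, going by the grouping above). If that is the scheme, the pairing can be confirmed inside the node, assuming openssl is present in the node image:

  $ out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh \
      "openssl x509 -noout -subject_hash -in /etc/ssl/certs/14919.pem"
  51391683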
TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220725155824-14919 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo systemctl is-active crio": exit status 1 (467.700353ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
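The "Non-zero exit" here is the passing outcome: systemctl is-active reports unit state through its exit code (3 for an inactive unit, which minikube ssh surfaces as its own exit status 1), and an inactive crio is exactly what this test expects on a docker-runtime cluster. By hand:

  $ out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo systemctl is-active crio"
  inactive
  $ echo $?    # 1 from minikube ssh, wrapping systemctl's status 3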
TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)
TestFunctional/parallel/Version/components (0.7s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220725155824-14919
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| k8s.gcr.io/etcd                             | 3.5.3-0                         | aebe758cef4cd | 299MB  |
| k8s.gcr.io/echoserver                       | 1.8                             | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | alpine                          | e46bcc6975310 | 23.5MB |
| k8s.gcr.io/kube-apiserver                   | v1.24.3                         | d521dd763e2e3 | 130MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>                          | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | latest                          | beae173ccac6a | 1.24MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | a4ca41631cc7a | 46.8MB |
| docker.io/localhost/my-image                | functional-20220725155824-14919 | 246bbcb17d328 | 1.24MB |
| docker.io/library/nginx                     | latest                          | 670dcc86b69df | 142MB  |
| k8s.gcr.io/kube-controller-manager          | v1.24.3                         | 586c112956dfc | 119MB  |
| k8s.gcr.io/kube-scheduler                   | v1.24.3                         | 3a5aa3a515f5d | 51MB   |
| docker.io/library/mysql                     | 5.7                             | 459651132a111 | 429MB  |
| k8s.gcr.io/pause                            | 3.7                             | 221177c6082a8 | 711kB  |
| k8s.gcr.io/pause                            | 3.6                             | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-20220725155824-14919 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3                             | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1                             | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-20220725155824-14919 | 2b265b2427643 | 30B    |
| k8s.gcr.io/kube-proxy                       | v1.24.3                         | 2ae1ba6417cbc | 110MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | latest                          | 350b164e7ae1d | 240kB  |
|---------------------------------------------|---------------------------------|---------------|--------|
2022/07/25 16:03:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls --format json:
[{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"246bbcb17d3284bfa327d07dc041a26b3c23374d76ccfe08a3fb059d39c19191","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220725155824-14919"],"size":"1240000"},{"id":"e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23500000"},{"id":"459651132a1115239f7370765464a0737d028ae7e74c68360740d81751fbae7e","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"429000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.3-0"],"size":"299000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe61
6dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"2b265b24276433ee159053ff99aa8ba93f28906534ff4e53038b53ae53cec95b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220725155824-14919"],"size":"30"},{"id":"670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"711000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220725155824-14919"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.3"],"size":"51000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.3"],"size":"130000000"},{"id":"586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.3"],"size":"119000000"},{"id":"2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.3"],"size":"110000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"315
00000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)
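The JSON form is an array of {id, repoDigests, repoTags, size} objects with size encoded as a string, so it pipes cleanly into jq; a small sketch, assuming jq is installed on the host:

  $ out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls --format json \
      | jq -r '.[] | .repoTags[0] + "\t" + .size'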
TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls --format yaml:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.7
size: "711000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.3
size: "110000000"
- id: 3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.3
size: "51000000"
- id: 459651132a1115239f7370765464a0737d028ae7e74c68360740d81751fbae7e
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "429000000"
- id: 2b265b24276433ee159053ff99aa8ba93f28906534ff4e53038b53ae53cec95b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220725155824-14919
size: "30"
- id: aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "299000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23500000"
- id: d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.3
size: "130000000"
- id: 586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.3
size: "119000000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh pgrep buildkitd: exit status 1 (445.428315ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image build -t localhost/my-image:functional-20220725155824-14919 testdata/build
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image build -t localhost/my-image:functional-20220725155824-14919 testdata/build: (2.721184073s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image build -t localhost/my-image:functional-20220725155824-14919 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 180a9db96834
Removing intermediate container 180a9db96834
---> ec14caa3aa62
Step 3/3 : ADD content.txt /
---> 246bbcb17d32
Successfully built 246bbcb17d32
Successfully tagged localhost/my-image:functional-20220725155824-14919
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.52s)
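The three build steps above imply that testdata/build holds a content.txt plus a Dockerfile equivalent to the following (reconstructed from Step 1/3 through Step 3/3 in the log, not quoted from the repo):

  $ cat testdata/build/Dockerfile
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /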
TestFunctional/parallel/ImageCommands/Setup (2.4s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.330461619s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.40s)
TestFunctional/parallel/DockerEnv/bash (1.79s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220725155824-14919 docker-env) && out/minikube-darwin-amd64 status -p functional-20220725155824-14919"
functional_test.go:491: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220725155824-14919 docker-env) && out/minikube-darwin-amd64 status -p functional-20220725155824-14919": (1.066984227s)
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220725155824-14919 docker-env) && docker images"
E0725 16:01:51.478906   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.79s)
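The eval in this test works because docker-env prints shell export statements pointing the local docker client at the cluster's daemon; the typical shape is sketched below (the values are placeholders, not taken from this log):

  $ out/minikube-darwin-amd64 -p functional-20220725155824-14919 docker-env
  export DOCKER_TLS_VERIFY="1"
  export DOCKER_HOST="tcp://127.0.0.1:<forwarded-port>"
  export DOCKER_CERT_PATH="<MINIKUBE_HOME>/certs"
  export MINIKUBE_ACTIVE_DOCKERD="functional-20220725155824-14919"
  $ eval $(out/minikube-darwin-amd64 -p functional-20220725155824-14919 docker-env) && docker images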
TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.42s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)
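All three UpdateContextCmd subtests run the same command against different kubeconfig states (no changes needed, no minikube cluster entry, no clusters at all); update-context rewrites the profile's kubeconfig entry to match the cluster's current address. Manually — the kubectl check is illustrative, not part of the test:

    minikube -p functional-20220725155824-14919 update-context
    kubectl config view --minify    # inspect the refreshed context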

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.78s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725155824-14919: (3.416964002s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.78s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725155824-14919: (2.15189609s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.52s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.31232624s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725155824-14919: (4.29593532s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image save gcr.io/google-containers/addon-resizer:functional-20220725155824-14919 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image save gcr.io/google-containers/addon-resizer:functional-20220725155824-14919 /Users/jenkins/workspace/addon-resizer-save.tar: (1.991913158s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.99s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image rm gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.38247112s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725155824-14919 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220725155824-14919: (2.598380678s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.74s)
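Taken together, the ImageCommands subtests above exercise a full image round trip between the host Docker daemon, the node's runtime, and a tarball on disk. Condensed to its manual equivalent (tag and path are illustrative):

    minikube image load --daemon gcr.io/google-containers/addon-resizer:1.8.8   # host daemon -> node
    minikube image ls                                                           # confirm it landed
    minikube image save gcr.io/google-containers/addon-resizer:1.8.8 ./a.tar    # node -> tarball
    minikube image rm gcr.io/google-containers/addon-resizer:1.8.8
    minikube image load ./a.tar                                                 # tarball -> node
    minikube image save --daemon gcr.io/google-containers/addon-resizer:1.8.8   # node -> host daemon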

TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

TestFunctional/parallel/ProfileCmd/profile_list (0.55s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-darwin-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "461.852386ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1324: Took "85.733355ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: Took "515.644082ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1374: Took "142.251347ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)
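The ProfileCmd subtests time both listing forms; note that --light skips per-cluster status probes and comes back several times faster here (142ms vs 516ms). For scripting, the JSON output is the stable surface; a sketch assuming the v1.26 schema with its "valid"/"invalid" profile arrays (jq is not part of the test):

    minikube profile list -o json --light | jq -r '.valid[].Name'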

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220725155824-14919 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.18s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220725155824-14919 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [6cd49169-3307-4721-9135-8e733921f73b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [6cd49169-3307-4721-9135-8e733921f73b] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.007967063s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220725155824-14919 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220725155824-14919 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 18557: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
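The TunnelCmd serial chain is the standard LoadBalancer workflow on the docker driver: keep a tunnel running, deploy a Service of type LoadBalancer, read its ingress IP, hit the endpoint, then tear the tunnel down. (The "unable to terminate pid" message is the test helper failing to signal the tunnel process, likely because it runs with elevated privileges; the subtest still passes.) Manual equivalent, with the tunnel kept running in a second shell:

    minikube -p functional-20220725155824-14919 tunnel --alsologtostderr    # keep running
    kubectl apply -f testdata/testsvc.yaml                                  # nginx-svc, type LoadBalancer
    kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://127.0.0.1/                                                  # the test asserts this responds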

TestFunctional/parallel/MountCmd/any-port (10.6s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220725155824-14919 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1092040973/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1658790165426474000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1092040973/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1658790165426474000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1092040973/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1658790165426474000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1092040973/001/test-1658790165426474000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (464.424619ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 25 23:02 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 25 23:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 25 23:02 test-1658790165426474000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh cat /mount-9p/test-1658790165426474000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220725155824-14919 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [1f15b551-ae00-487a-9afb-5145bd2c3f5e] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [1f15b551-ae00-487a-9afb-5145bd2c3f5e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [1f15b551-ae00-487a-9afb-5145bd2c3f5e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [1f15b551-ae00-487a-9afb-5145bd2c3f5e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00829044s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220725155824-14919 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh stat /mount-9p/created-by-test
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220725155824-14919 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1092040973/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.60s)
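MountCmd/any-port covers the 9p mount end to end: export a host temp dir into the guest at /mount-9p, verify it over ssh (the first findmnt probe fails with exit status 1 because the mount is not ready yet; the test simply retries), then have a busybox pod read and write through it. Manual equivalent (host path is illustrative):

    minikube mount /tmp/shared:/mount-9p --alsologtostderr -v=1 &    # keep running
    minikube ssh "findmnt -T /mount-9p | grep 9p"                    # confirm the 9p mount
    minikube ssh -- ls -la /mount-9p
    minikube ssh "sudo umount -f /mount-9p"                          # cleanup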

TestFunctional/parallel/MountCmd/specific-port (2.81s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220725155824-14919 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2416724100/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (500.655286ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220725155824-14919 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2416724100/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh "sudo umount -f /mount-9p": exit status 1 (422.010236ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220725155824-14919 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220725155824-14919 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2416724100/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.81s)

TestFunctional/delete_addon-resizer_images (0.17s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220725155824-14919
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.07s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220725155824-14919
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.07s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220725155824-14919
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestJSONOutput/start/Command (41.44s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220725161043-14919 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0725 16:11:10.499343   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220725161043-14919 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (41.437784975s)
--- PASS: TestJSONOutput/start/Command (41.44s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220725161043-14919 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.72s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220725161043-14919 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.43s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220725161043-14919 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220725161043-14919 --output=json --user=testUser: (12.428056997s)
--- PASS: TestJSONOutput/stop/Command (12.43s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.77s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220725161140-14919 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220725161140-14919 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (328.859405ms)
-- stdout --
	{"specversion":"1.0","id":"d56490c9-51ae-47ed-a518-5010f9265e5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220725161140-14919] minikube v1.26.0 on Darwin 12.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"25a78084-6b20-4c8f-9255-f55c40dc6f85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14555"}}
	{"specversion":"1.0","id":"0c9803bf-b6b0-42e2-913b-583c9b4b7148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig"}}
	{"specversion":"1.0","id":"d45b9fda-f2af-4b19-86bd-58aba1f99102","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"103d514c-b958-4400-8d01-7e4fa93d8178","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dbdefb07-182e-4a06-a5bd-c8b5d4557604","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube"}}
	{"specversion":"1.0","id":"8de401e4-6d8d-4d59-a14c-bb75aff49f3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220725161140-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220725161140-14919
--- PASS: TestErrorJSONOutput (0.77s)
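TestErrorJSONOutput forces a failure with --driver=fail to check that errors are also emitted as CloudEvents on stdout; the last event above has type io.k8s.sigs.minikube.error, exitcode 56 and name DRV_UNSUPPORTED_OS. A consumer can filter the stream, e.g. (jq usage is an assumption, not part of the test):

    minikube start -p demo --output=json --driver=fail 2>/dev/null \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # -> The driver 'fail' is not supported on darwin/amd64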

TestKicCustomNetwork/create_custom_network (31.24s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220725161141-14919 --network=
E0725 16:11:55.843693   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220725161141-14919 --network=: (28.413949213s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220725161141-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220725161141-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220725161141-14919: (2.758385178s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.24s)

TestKicCustomNetwork/use_default_bridge_network (30.49s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220725161212-14919 --network=bridge
E0725 16:12:23.536784   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220725161212-14919 --network=bridge: (27.830409851s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220725161212-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220725161212-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220725161212-14919: (2.590595619s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.49s)

TestKicExistingNetwork (31.25s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220725161243-14919 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220725161243-14919 --network=existing-network: (28.265499682s)
helpers_test.go:175: Cleaning up "existing-network-20220725161243-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220725161243-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220725161243-14919: (2.577559667s)
--- PASS: TestKicExistingNetwork (31.25s)

TestKicCustomSubnet (31.99s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220725161314-14919 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220725161314-14919 --subnet=192.168.60.0/24: (29.182623723s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220725161314-14919 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220725161314-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220725161314-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220725161314-14919: (2.738393632s)
--- PASS: TestKicCustomSubnet (31.99s)
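The Kic network tests map minikube flags onto docker networks: an empty --network= lets minikube create a per-profile network, --network=bridge reuses docker's default bridge, TestKicExistingNetwork attaches to a network created ahead of time (docker network create existing-network would be the manual setup), and --subnet pins the CIDR. Verification is plain docker tooling, as in the log (profile name is illustrative; the kic network is named after the profile):

    minikube start -p custom-subnet --subnet=192.168.60.0/24
    docker network inspect custom-subnet --format "{{(index .IPAM.Config 0).Subnet}}"
    # -> 192.168.60.0/24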

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (66.17s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-20220725161346-14919 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-20220725161346-14919 --driver=docker : (29.126805059s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-20220725161346-14919 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-20220725161346-14919 --driver=docker : (29.391778265s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-20220725161346-14919
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-20220725161346-14919
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220725161346-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-20220725161346-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-20220725161346-14919: (2.732153807s)
helpers_test.go:175: Cleaning up "first-20220725161346-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-20220725161346-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-20220725161346-14919: (2.838017813s)
--- PASS: TestMinikubeProfile (66.17s)
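TestMinikubeProfile starts two clusters and flips the active profile between them, re-reading profile list -ojson after each switch. The pattern (profile names are illustrative):

    minikube start -p first --driver=docker
    minikube start -p second --driver=docker
    minikube profile first          # make "first" the active profile
    minikube profile list -ojson    # inspect both profiles
    minikube delete -p second && minikube delete -p first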

TestMountStart/serial/StartWithMountFirst (7.88s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220725161452-14919 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220725161452-14919 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.881234875s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.88s)

TestMountStart/serial/VerifyMountFirst (0.44s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220725161452-14919 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.44s)

TestMountStart/serial/StartWithMountSecond (7.72s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220725161452-14919 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220725161452-14919 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.717578767s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.72s)

TestMountStart/serial/VerifyMountSecond (0.45s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220725161452-14919 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.45s)

TestMountStart/serial/DeleteFirst (2.27s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220725161452-14919 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220725161452-14919 --alsologtostderr -v=5: (2.272083071s)
--- PASS: TestMountStart/serial/DeleteFirst (2.27s)

TestMountStart/serial/VerifyMountPostDelete (0.49s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220725161452-14919 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.49s)

TestMountStart/serial/Stop (1.63s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220725161452-14919
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220725161452-14919: (1.627220935s)
--- PASS: TestMountStart/serial/Stop (1.63s)

TestMountStart/serial/RestartStopped (5.32s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220725161452-14919
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220725161452-14919: (4.320057439s)
--- PASS: TestMountStart/serial/RestartStopped (5.32s)

TestMountStart/serial/VerifyMountPostStop (0.44s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220725161452-14919 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.44s)
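The MountStart serial chain checks that the built-in host mount (visible in the guest at /minikube-host) survives deleting a sibling profile and a stop/start cycle. The flags under test, as started above:

    minikube start -p mount-2 --memory=2048 --mount --mount-gid 0 \
      --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
    minikube -p mount-2 ssh -- ls /minikube-host    # the host mount is visible in the guest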

TestMultiNode/serial/FreshStart2Nodes (107.06s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220725161522-14919 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0725 16:16:10.549666   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:16:55.842731   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220725161522-14919 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m46.293023836s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.06s)

TestMultiNode/serial/DeployApp2Nodes (6.57s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.700950204s)
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- rollout status deployment/busybox: (3.379043767s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- exec busybox-d46db594c-k9v2r -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- exec busybox-d46db594c-qw5hf -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- exec busybox-d46db594c-k9v2r -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- exec busybox-d46db594c-qw5hf -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- exec busybox-d46db594c-k9v2r -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- exec busybox-d46db594c-qw5hf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.57s)

TestMultiNode/serial/PingHostFrom2Pods (0.88s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- exec busybox-d46db594c-k9v2r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- exec busybox-d46db594c-k9v2r -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- exec busybox-d46db594c-qw5hf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725161522-14919 -- exec busybox-d46db594c-qw5hf -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
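PingHostFrom2Pods resolves host.minikube.internal from inside each busybox pod and pings the address it returns (192.168.65.2 here, the host as seen from Docker Desktop's VM on macOS). The same probe by hand, against one pod (pod name taken from the run above; the log wraps these in minikube kubectl -p <profile> --):

    kubectl exec busybox-d46db594c-k9v2r -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl exec busybox-d46db594c-k9v2r -- sh -c "ping -c 1 192.168.65.2"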

TestMultiNode/serial/AddNode (34.78s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220725161522-14919 -v 3 --alsologtostderr
E0725 16:17:33.611542   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220725161522-14919 -v 3 --alsologtostderr: (33.565715185s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status --alsologtostderr: (1.213009302s)
--- PASS: TestMultiNode/serial/AddNode (34.78s)

TestMultiNode/serial/ProfileList (0.6s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

TestMultiNode/serial/CopyFile (17.1s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status --output json --alsologtostderr: (1.128392113s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp testdata/cp-test.txt multinode-20220725161522-14919:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp multinode-20220725161522-14919:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile3299461908/001/cp-test_multinode-20220725161522-14919.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp multinode-20220725161522-14919:/home/docker/cp-test.txt multinode-20220725161522-14919-m02:/home/docker/cp-test_multinode-20220725161522-14919_multinode-20220725161522-14919-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m02 "sudo cat /home/docker/cp-test_multinode-20220725161522-14919_multinode-20220725161522-14919-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp multinode-20220725161522-14919:/home/docker/cp-test.txt multinode-20220725161522-14919-m03:/home/docker/cp-test_multinode-20220725161522-14919_multinode-20220725161522-14919-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m03 "sudo cat /home/docker/cp-test_multinode-20220725161522-14919_multinode-20220725161522-14919-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp testdata/cp-test.txt multinode-20220725161522-14919-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp multinode-20220725161522-14919-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile3299461908/001/cp-test_multinode-20220725161522-14919-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp multinode-20220725161522-14919-m02:/home/docker/cp-test.txt multinode-20220725161522-14919:/home/docker/cp-test_multinode-20220725161522-14919-m02_multinode-20220725161522-14919.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919 "sudo cat /home/docker/cp-test_multinode-20220725161522-14919-m02_multinode-20220725161522-14919.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp multinode-20220725161522-14919-m02:/home/docker/cp-test.txt multinode-20220725161522-14919-m03:/home/docker/cp-test_multinode-20220725161522-14919-m02_multinode-20220725161522-14919-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m03 "sudo cat /home/docker/cp-test_multinode-20220725161522-14919-m02_multinode-20220725161522-14919-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp testdata/cp-test.txt multinode-20220725161522-14919-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp multinode-20220725161522-14919-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile3299461908/001/cp-test_multinode-20220725161522-14919-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp multinode-20220725161522-14919-m03:/home/docker/cp-test.txt multinode-20220725161522-14919:/home/docker/cp-test_multinode-20220725161522-14919-m03_multinode-20220725161522-14919.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919 "sudo cat /home/docker/cp-test_multinode-20220725161522-14919-m03_multinode-20220725161522-14919.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 cp multinode-20220725161522-14919-m03:/home/docker/cp-test.txt multinode-20220725161522-14919-m02:/home/docker/cp-test_multinode-20220725161522-14919-m03_multinode-20220725161522-14919-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 ssh -n multinode-20220725161522-14919-m02 "sudo cat /home/docker/cp-test_multinode-20220725161522-14919-m03_multinode-20220725161522-14919-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (17.10s)
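
Note: the cp invocations above cover all three copy directions the subcommand accepts. A minimal sketch (<profile>, <node-a>, <node-b> are placeholders):

$ minikube -p <profile> cp testdata/cp-test.txt <node-a>:/home/docker/cp-test.txt
$ minikube -p <profile> cp <node-a>:/home/docker/cp-test.txt /tmp/cp-test.txt
$ minikube -p <profile> cp <node-a>:/home/docker/cp-test.txt <node-b>:/home/docker/cp-test.txt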

TestMultiNode/serial/StopNode (14.25s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725161522-14919 node stop m03: (12.486629352s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status: exit status 7 (901.076501ms)
-- stdout --
	multinode-20220725161522-14919
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220725161522-14919-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220725161522-14919-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status --alsologtostderr: exit status 7 (858.711462ms)
-- stdout --
	multinode-20220725161522-14919
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220725161522-14919-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220725161522-14919-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0725 16:18:22.704105   22318 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:18:22.704292   22318 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:18:22.704297   22318 out.go:309] Setting ErrFile to fd 2...
	I0725 16:18:22.704301   22318 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:18:22.704414   22318 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:18:22.704611   22318 out.go:303] Setting JSON to false
	I0725 16:18:22.704627   22318 mustload.go:65] Loading cluster: multinode-20220725161522-14919
	I0725 16:18:22.704935   22318 config.go:178] Loaded profile config "multinode-20220725161522-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:18:22.704944   22318 status.go:253] checking status of multinode-20220725161522-14919 ...
	I0725 16:18:22.705300   22318 cli_runner.go:164] Run: docker container inspect multinode-20220725161522-14919 --format={{.State.Status}}
	I0725 16:18:22.779279   22318 status.go:328] multinode-20220725161522-14919 host status = "Running" (err=<nil>)
	I0725 16:18:22.779317   22318 host.go:66] Checking if "multinode-20220725161522-14919" exists ...
	I0725 16:18:22.779583   22318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220725161522-14919
	I0725 16:18:22.852674   22318 host.go:66] Checking if "multinode-20220725161522-14919" exists ...
	I0725 16:18:22.852957   22318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:18:22.853016   22318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220725161522-14919
	I0725 16:18:22.927726   22318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59196 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/multinode-20220725161522-14919/id_rsa Username:docker}
	I0725 16:18:23.013985   22318 ssh_runner.go:195] Run: systemctl --version
	I0725 16:18:23.018322   22318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:18:23.027146   22318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220725161522-14919
	I0725 16:18:23.099926   22318 kubeconfig.go:92] found "multinode-20220725161522-14919" server: "https://127.0.0.1:59200"
	I0725 16:18:23.099954   22318 api_server.go:165] Checking apiserver status ...
	I0725 16:18:23.099991   22318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 16:18:23.109820   22318 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1651/cgroup
	W0725 16:18:23.118293   22318 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1651/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 16:18:23.118311   22318 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59200/healthz ...
	I0725 16:18:23.124188   22318 api_server.go:266] https://127.0.0.1:59200/healthz returned 200:
	ok
	I0725 16:18:23.124203   22318 status.go:419] multinode-20220725161522-14919 apiserver status = Running (err=<nil>)
	I0725 16:18:23.124211   22318 status.go:255] multinode-20220725161522-14919 status: &{Name:multinode-20220725161522-14919 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 16:18:23.124224   22318 status.go:253] checking status of multinode-20220725161522-14919-m02 ...
	I0725 16:18:23.124472   22318 cli_runner.go:164] Run: docker container inspect multinode-20220725161522-14919-m02 --format={{.State.Status}}
	I0725 16:18:23.196930   22318 status.go:328] multinode-20220725161522-14919-m02 host status = "Running" (err=<nil>)
	I0725 16:18:23.196951   22318 host.go:66] Checking if "multinode-20220725161522-14919-m02" exists ...
	I0725 16:18:23.197198   22318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220725161522-14919-m02
	I0725 16:18:23.268624   22318 host.go:66] Checking if "multinode-20220725161522-14919-m02" exists ...
	I0725 16:18:23.268907   22318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 16:18:23.268952   22318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220725161522-14919-m02
	I0725 16:18:23.342252   22318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59324 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/machines/multinode-20220725161522-14919-m02/id_rsa Username:docker}
	I0725 16:18:23.428547   22318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 16:18:23.437570   22318 status.go:255] multinode-20220725161522-14919-m02 status: &{Name:multinode-20220725161522-14919-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0725 16:18:23.437591   22318 status.go:253] checking status of multinode-20220725161522-14919-m03 ...
	I0725 16:18:23.437829   22318 cli_runner.go:164] Run: docker container inspect multinode-20220725161522-14919-m03 --format={{.State.Status}}
	I0725 16:18:23.509316   22318 status.go:328] multinode-20220725161522-14919-m03 host status = "Stopped" (err=<nil>)
	I0725 16:18:23.509340   22318 status.go:341] host is not running, skipping remaining checks
	I0725 16:18:23.509359   22318 status.go:255] multinode-20220725161522-14919-m03 status: &{Name:multinode-20220725161522-14919-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.25s)
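
Note: stopping a single node leaves the rest of the cluster running; as the run above shows, status then returns exit code 7 rather than 0. A minimal sketch with <profile> as a placeholder:

$ minikube -p <profile> node stop m03
$ minikube -p <profile> status        # exit status 7 once any node is stopped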

TestMultiNode/serial/StartAfterStop (19.95s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725161522-14919 node start m03 --alsologtostderr: (18.702161987s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status: (1.130465149s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (19.95s)
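
Note: the stopped node is brought back with node start, after which status exits 0 again and the node reappears in kubectl. Sketch with <profile> as a placeholder:

$ minikube -p <profile> node start m03 --alsologtostderr
$ minikube -p <profile> status
$ kubectl get nodes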

TestMultiNode/serial/RestartKeepsNodes (136.89s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220725161522-14919
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220725161522-14919
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220725161522-14919: (37.023295466s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220725161522-14919 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220725161522-14919 --wait=true -v=8 --alsologtostderr: (1m39.763705263s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220725161522-14919
--- PASS: TestMultiNode/serial/RestartKeepsNodes (136.89s)
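
Note: the invariant this subtest checks is that the node list is identical before the stop and after the restart. Sketch with <profile> as a placeholder:

$ minikube node list -p <profile>
$ minikube stop -p <profile>
$ minikube start -p <profile> --wait=true
$ minikube node list -p <profile>     # expected to match the first listing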

TestMultiNode/serial/DeleteNode (18.86s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 node delete m03
E0725 16:21:10.549184   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725161522-14919 node delete m03: (16.476847392s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (1.468133295s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (18.86s)
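
Note: node delete removes the node from the cluster; the test then cross-checks the result via kubectl and docker volume ls, as above. Sketch with <profile> as a placeholder:

$ minikube -p <profile> node delete m03
$ kubectl get nodes
$ docker volume ls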

TestMultiNode/serial/StopMultiNode (25.14s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725161522-14919 stop: (24.784595198s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status: exit status 7 (178.382942ms)
-- stdout --
	multinode-20220725161522-14919
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220725161522-14919-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status --alsologtostderr: exit status 7 (179.855421ms)
-- stdout --
	multinode-20220725161522-14919
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220725161522-14919-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0725 16:21:44.231724   22982 out.go:296] Setting OutFile to fd 1 ...
	I0725 16:21:44.231928   22982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:21:44.231934   22982 out.go:309] Setting ErrFile to fd 2...
	I0725 16:21:44.231937   22982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 16:21:44.232071   22982 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/bin
	I0725 16:21:44.232251   22982 out.go:303] Setting JSON to false
	I0725 16:21:44.232266   22982 mustload.go:65] Loading cluster: multinode-20220725161522-14919
	I0725 16:21:44.232552   22982 config.go:178] Loaded profile config "multinode-20220725161522-14919": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0725 16:21:44.232566   22982 status.go:253] checking status of multinode-20220725161522-14919 ...
	I0725 16:21:44.232923   22982 cli_runner.go:164] Run: docker container inspect multinode-20220725161522-14919 --format={{.State.Status}}
	I0725 16:21:44.296556   22982 status.go:328] multinode-20220725161522-14919 host status = "Stopped" (err=<nil>)
	I0725 16:21:44.296587   22982 status.go:341] host is not running, skipping remaining checks
	I0725 16:21:44.296598   22982 status.go:255] multinode-20220725161522-14919 status: &{Name:multinode-20220725161522-14919 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 16:21:44.296634   22982 status.go:253] checking status of multinode-20220725161522-14919-m02 ...
	I0725 16:21:44.296926   22982 cli_runner.go:164] Run: docker container inspect multinode-20220725161522-14919-m02 --format={{.State.Status}}
	I0725 16:21:44.361141   22982 status.go:328] multinode-20220725161522-14919-m02 host status = "Stopped" (err=<nil>)
	I0725 16:21:44.361163   22982 status.go:341] host is not running, skipping remaining checks
	I0725 16:21:44.361167   22982 status.go:255] multinode-20220725161522-14919-m02 status: &{Name:multinode-20220725161522-14919-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.14s)

TestMultiNode/serial/RestartMultiNode (58.08s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220725161522-14919 --wait=true -v=8 --alsologtostderr --driver=docker 
E0725 16:21:55.842045   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220725161522-14919 --wait=true -v=8 --alsologtostderr --driver=docker : (55.599163463s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725161522-14919 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:372: (dbg) Done: kubectl get nodes: (1.54410665s)
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.08s)

TestMultiNode/serial/ValidateNameConflict (32.06s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220725161522-14919
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220725161522-14919-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220725161522-14919-m02 --driver=docker : exit status 14 (369.948324ms)
-- stdout --
	* [multinode-20220725161522-14919-m02] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220725161522-14919-m02' is duplicated with machine name 'multinode-20220725161522-14919-m02' in profile 'multinode-20220725161522-14919'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220725161522-14919-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220725161522-14919-m03 --driver=docker : (28.310297622s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220725161522-14919
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220725161522-14919: exit status 80 (536.220472ms)
-- stdout --
	* Adding node m03 to cluster multinode-20220725161522-14919
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220725161522-14919-m03 already exists in multinode-20220725161522-14919-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220725161522-14919-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220725161522-14919-m03: (2.788592423s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.06s)
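
Note: two name-conflict failure modes are exercised above: starting a profile whose name collides with an existing machine name fails with MK_USAGE (exit 14), and node add fails with GUEST_NODE_ADD (exit 80) when the next node name is already taken by a profile. Sketch of the first, with <profile> as a placeholder:

$ minikube start -p <profile>-m02 --driver=docker     # exit 14 when <profile> already owns machine <profile>-m02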

TestScheduledStopUnix (103.75s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220725162744-14919 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220725162744-14919 --memory=2048 --driver=docker : (29.283406114s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220725162744-14919 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220725162744-14919 -n scheduled-stop-20220725162744-14919
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220725162744-14919 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220725162744-14919 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220725162744-14919 -n scheduled-stop-20220725162744-14919
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220725162744-14919
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220725162744-14919 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220725162744-14919
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220725162744-14919: exit status 7 (121.669896ms)
-- stdout --
	scheduled-stop-20220725162744-14919
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220725162744-14919 -n scheduled-stop-20220725162744-14919
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220725162744-14919 -n scheduled-stop-20220725162744-14919: exit status 7 (114.47303ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220725162744-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220725162744-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220725162744-14919: (2.416869644s)
--- PASS: TestScheduledStopUnix (103.75s)
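
Note: the scheduled-stop workflow exercised above, with <profile> as a placeholder:

$ minikube stop -p <profile> --schedule 5m
$ minikube stop -p <profile> --cancel-scheduled
$ minikube stop -p <profile> --schedule 15s
$ minikube status --format={{.Host}} -p <profile>     # "Stopped" (exit 7) once the schedule fires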

TestSkaffold (63.6s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe621688513 version
skaffold_test.go:63: skaffold version: v1.39.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220725162928-14919 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220725162928-14919 --memory=2600 --driver=docker : (30.087117296s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe621688513 run --minikube-profile skaffold-20220725162928-14919 --kube-context skaffold-20220725162928-14919 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe621688513 run --minikube-profile skaffold-20220725162928-14919 --kube-context skaffold-20220725162928-14919 --status-check=true --port-forward=false --interactive=false: (18.940237997s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-6f9f84c6c6-fx4tw" [57e71591-511d-4c50-aac8-0fdce8ba71cc] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013696201s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-8567fcc74b-slkxh" [ef4957d0-31a8-48af-bc8d-112fcd81532f] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.007219202s
helpers_test.go:175: Cleaning up "skaffold-20220725162928-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220725162928-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220725162928-14919: (3.001596235s)
--- PASS: TestSkaffold (63.60s)
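
Note: skaffold is pointed at the cluster by minikube profile and kube-context; the flags below are the ones used in the run above, with <profile> as a placeholder:

$ skaffold run --minikube-profile <profile> --kube-context <profile> --status-check=true --port-forward=false --interactive=false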

TestInsufficientStorage (13.41s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220725163031-14919 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220725163031-14919 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (10.00310658s)
-- stdout --
	{"specversion":"1.0","id":"0c964731-3bad-4039-b6a2-b34a491a5717","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220725163031-14919] minikube v1.26.0 on Darwin 12.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3bc3440-e377-47a8-a04d-8cf757621a28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14555"}}
	{"specversion":"1.0","id":"d54a7677-d107-4cf2-ae87-354da7267670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig"}}
	{"specversion":"1.0","id":"a4424a61-7c9d-411e-99cb-2a81f896ce84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"9c6f2d6f-1def-4076-85d9-86e4bfda4eea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"45ab5d00-bc1e-4999-b226-e357becada9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube"}}
	{"specversion":"1.0","id":"55f12a04-759e-4fb8-9e49-00c8c531727d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a2e8c0a5-ce06-4082-b58b-5b275534999c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"99f21ab5-ce4c-4779-af20-03a824167c2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"41edcb5e-92ea-4c44-95e1-d9dacbef8a45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"df6d445f-085a-4d4d-a9e0-6a5365fbba3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220725163031-14919 in cluster insufficient-storage-20220725163031-14919","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"410959f6-4716-4cb8-9110-f099a55dc698","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc55cc6b-a665-4e0c-803f-f66dc132aa21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"08263f8c-915e-495e-b32d-a84e3540f96b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220725163031-14919 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220725163031-14919 --output=json --layout=cluster: exit status 7 (460.935813ms)
-- stdout --
	{"Name":"insufficient-storage-20220725163031-14919","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220725163031-14919","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0725 16:30:42.398090   24576 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220725163031-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220725163031-14919 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220725163031-14919 --output=json --layout=cluster: exit status 7 (445.29563ms)
-- stdout --
	{"Name":"insufficient-storage-20220725163031-14919","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220725163031-14919","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0725 16:30:42.844625   24586 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220725163031-14919" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	E0725 16:30:42.853093   24586 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/insufficient-storage-20220725163031-14919/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220725163031-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220725163031-14919
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220725163031-14919: (2.496987041s)
--- PASS: TestInsufficientStorage (13.41s)
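
Note: the out-of-disk condition is simulated via the MINIKUBE_TEST_* variables visible in the JSON events above; start then aborts with exit 26 (RSRC_DOCKER_STORAGE), and per its own message --force would skip the check. Sketch with <profile> as a placeholder:

$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 minikube start -p <profile> --output=json --driver=docker
$ minikube status -p <profile> --output=json --layout=cluster   # StatusName InsufficientStorage, exit 7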

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (6.51s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14555
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4151203322/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4151203322/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4151203322/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4151203322/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (6.51s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.11s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14555
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3753787148/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3753787148/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3753787148/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3753787148/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.11s)

TestStoppedBinaryUpgrade/Setup (0.8s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.80s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.56s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220725163620-14919
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220725163620-14919: (3.562957397s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.56s)

TestPause/serial/Start (44.2s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220725163713-14919 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220725163713-14919 --memory=2048 --install-addons=false --wait=all --driver=docker : (44.196901157s)
--- PASS: TestPause/serial/Start (44.20s)

TestPause/serial/SecondStartNoReconfiguration (41.93s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220725163713-14919 --alsologtostderr -v=1 --driver=docker 
E0725 16:38:02.774650   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220725163713-14919 --alsologtostderr -v=1 --driver=docker : (41.915276309s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.93s)

TestPause/serial/Pause (0.85s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220725163713-14919 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)
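
Note: pause suspends Kubernetes in a running profile without stopping the host container. Sketch with <profile> as a placeholder:

$ minikube pause -p <profile> --alsologtostderr -v=5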

TestNoKubernetes/serial/StartNoK8sWithVersion (0.58s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220725163945-14919 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220725163945-14919 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (584.223282ms)
-- stdout --
	* [NoKubernetes-20220725163945-14919] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.58s)
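
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, so the start fails fast with MK_USAGE (exit 14); the stderr above shows the suggested recovery. Sketch with <profile> as a placeholder:

$ minikube start -p <profile> --no-kubernetes --kubernetes-version=1.20 --driver=docker    # exit 14
$ minikube config unset kubernetes-version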

TestNoKubernetes/serial/StartWithK8s (30.35s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220725163945-14919 --driver=docker 
E0725 16:39:58.980667   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220725163945-14919 --driver=docker : (29.830905622s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220725163945-14919 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.35s)

TestNetworkPlugins/group/auto/Start (46.54s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220725163045-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220725163045-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (46.536257091s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.54s)

TestNoKubernetes/serial/StartWithStopK8s (17.72s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220725163945-14919 --no-kubernetes --driver=docker 
E0725 16:40:18.924402   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220725163945-14919 --no-kubernetes --driver=docker : (14.601005224s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220725163945-14919 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220725163945-14919 status -o json: exit status 2 (454.534654ms)
-- stdout --
	{"Name":"NoKubernetes-20220725163945-14919","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220725163945-14919
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220725163945-14919: (2.664523206s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.72s)
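
Note: restarting an existing Kubernetes profile with --no-kubernetes keeps the host container running but stops kubelet and the apiserver, so status exits 2 instead of 0, as shown above. Sketch with <profile> as a placeholder:

$ minikube start -p <profile> --no-kubernetes --driver=docker
$ minikube -p <profile> status -o json    # Host Running, Kubelet/APIServer Stopped; exit 2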

TestNoKubernetes/serial/Start (6.86s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220725163945-14919 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220725163945-14919 --no-kubernetes --driver=docker : (6.859317783s)
--- PASS: TestNoKubernetes/serial/Start (6.86s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220725163945-14919 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220725163945-14919 "sudo systemctl is-active --quiet service kubelet": exit status 1 (435.685261ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)
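
Note: the "Kubernetes not running" check is a systemctl probe over ssh; minikube ssh propagates the failure as exit 1 when kubelet is inactive. Sketch with <profile> as a placeholder:

$ minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"    # exit 1 when kubelet is not active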

TestNoKubernetes/serial/ProfileList (1.62s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.62s)

TestNoKubernetes/serial/Stop (1.68s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220725163945-14919
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220725163945-14919: (1.677361682s)
--- PASS: TestNoKubernetes/serial/Stop (1.68s)

TestNoKubernetes/serial/StartNoArgs (4.53s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220725163945-14919 --driver=docker 
E0725 16:40:46.616437   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220725163945-14919 --driver=docker : (4.530080237s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.53s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220725163945-14919 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220725163945-14919 "sudo systemctl is-active --quiet service kubelet": exit status 1 (453.550543ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

TestNetworkPlugins/group/kindnet/Start (51.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220725163046-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220725163046-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (51.209165271s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.21s)

TestNetworkPlugins/group/auto/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220725163045-14919 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.51s)

TestNetworkPlugins/group/auto/NetCatPod (59.95s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220725163045-14919 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context auto-20220725163045-14919 replace --force -f testdata/netcat-deployment.yaml: (1.612481924s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-dv25t" [ca9ee8fa-705e-4e85-a3a1-93227838136c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0725 16:41:10.633393   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-dv25t" [ca9ee8fa-705e-4e85-a3a1-93227838136c] Running
E0725 16:41:55.927148   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 58.300964584s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (59.95s)
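Note: the "waiting 15m0s for pods matching app=netcat" polling above can be reproduced outside the suite with kubectl's built-in readiness wait:

    kubectl --context auto-20220725163045-14919 wait pod -l app=netcat \
      --for=condition=Ready --timeout=15m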

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-qss86" [b727a4e9-a3b1-4d9a-9f27-b81cb6d3e3dd] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013754366s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)
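Note: ControllerPod only verifies that the CNI's own pod becomes healthy; the equivalent manual check is a label query in kube-system:

    kubectl --context kindnet-20220725163046-14919 -n kube-system get pods -l app=kindnet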

TestNetworkPlugins/group/kindnet/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220725163046-14919 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.55s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.96s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220725163046-14919 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kindnet-20220725163046-14919 replace --force -f testdata/netcat-deployment.yaml: (1.914326178s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-drgvc" [078b9a77-3c7d-4623-b96c-df8353638f76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-drgvc" [078b9a77-3c7d-4623-b96c-df8353638f76] Running

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.008295895s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.96s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220725163045-14919 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (5.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.122143876s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.12s)
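Note: the hairpin probe dials the deployment's own Service name from inside the pod, and nc -z exits 0 only on a successful connect. The non-zero exit above still yields a PASS, so the suite evidently treats a failed hairpin dial as the expected outcome for this configuration. A hand-run version that also echoes the exit code:

    kubectl --context auto-20220725163045-14919 exec deployment/netcat -- \
      /bin/sh -c 'nc -w 5 -i 5 -z netcat 8080; echo exit=$?'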

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220725163046-14919 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220725163046-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220725163046-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/cilium/Start (84.37s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220725163046-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220725163046-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m24.373356926s)
--- PASS: TestNetworkPlugins/group/cilium/Start (84.37s)

TestNetworkPlugins/group/calico/Start (94.19s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220725163046-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220725163046-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (1m34.188184969s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.19s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-xb4cr" [3b1282a8-0251-45db-97b8-4e80d11df9bf] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.021343311s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.52s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220725163046-14919 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.52s)

TestNetworkPlugins/group/cilium/NetCatPod (14.36s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220725163046-14919 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-20220725163046-14919 replace --force -f testdata/netcat-deployment.yaml: (2.281998087s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-l9p26" [02c7d50e-b75b-4b97-9c7a-4cb2fdcec997] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-l9p26" [02c7d50e-b75b-4b97-9c7a-4cb2fdcec997] Running

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.007774848s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (14.36s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-sbz7c" [e453e785-c85d-4c00-9f39-80b67347277c] Running

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.022430553s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220725163046-14919 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

TestNetworkPlugins/group/calico/NetCatPod (12.82s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220725163046-14919 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context calico-20220725163046-14919 replace --force -f testdata/netcat-deployment.yaml: (1.770994111s)
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-xdk4j" [7c556b48-bae0-4adc-842a-7b9f38f48e68] Pending
helpers_test.go:342: "netcat-869c55b6dc-xdk4j" [7c556b48-bae0-4adc-842a-7b9f38f48e68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-xdk4j" [7c556b48-bae0-4adc-842a-7b9f38f48e68] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.017172401s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.82s)

TestNetworkPlugins/group/cilium/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220725163046-14919 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.13s)

TestNetworkPlugins/group/cilium/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220725163046-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.11s)

TestNetworkPlugins/group/cilium/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220725163046-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.12s)

TestNetworkPlugins/group/false/Start (49.24s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220725163046-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220725163046-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (49.237337811s)
--- PASS: TestNetworkPlugins/group/false/Start (49.24s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220725163046-14919 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220725163046-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220725163046-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (47.8s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220725163045-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220725163045-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (47.800268248s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.80s)

TestNetworkPlugins/group/false/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220725163046-14919 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.47s)

TestNetworkPlugins/group/false/NetCatPod (14.25s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220725163046-14919 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context false-20220725163046-14919 replace --force -f testdata/netcat-deployment.yaml: (2.191976161s)
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-nm6tf" [034c2572-b035-4d03-ba68-dfcf2b20be0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-nm6tf" [034c2572-b035-4d03-ba68-dfcf2b20be0f] Running

=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.016352688s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.25s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220725163045-14919 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

TestNetworkPlugins/group/bridge/NetCatPod (13.75s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220725163045-14919 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context bridge-20220725163045-14919 replace --force -f testdata/netcat-deployment.yaml: (1.715244591s)
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-rfkss" [00c7323a-1bf3-424b-8438-2cffff55e699] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-rfkss" [00c7323a-1bf3-424b-8438-2cffff55e699] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.006076639s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.75s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220725163046-14919 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220725163046-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (5.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220725163046-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220725163046-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.121708458s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.12s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220725163045-14919 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (46.73s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220725163045-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220725163045-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (46.734513193s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (46.73s)

TestNetworkPlugins/group/kubenet/Start (45.72s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220725163045-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
E0725 16:45:18.927096   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220725163045-14919 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (45.719352692s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (45.72s)
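Note: unlike the CNI-based starts above (--cni=kindnet, cilium, calico, bridge, false, or --enable-default-cni=true), kubenet is selected through the legacy --network-plugin flag. Side by side, with a placeholder profile name:

    out/minikube-darwin-amd64 start -p <profile> --cni=kindnet --driver=docker             # CNI plugin
    out/minikube-darwin-amd64 start -p <profile> --network-plugin=kubenet --driver=docker  # kubenet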

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220725163045-14919 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220725163045-14919 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context enable-default-cni-20220725163045-14919 replace --force -f testdata/netcat-deployment.yaml: (1.728885466s)
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-j4sd7" [fe531869-34d9-4792-a8fe-45733b8cf030] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-j4sd7" [fe531869-34d9-4792-a8fe-45733b8cf030] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.028632924s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.79s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.97s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220725163045-14919 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.97s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.08s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220725163045-14919 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context kubenet-20220725163045-14919 replace --force -f testdata/netcat-deployment.yaml: (2.042825613s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-s8ztt" [72875a54-64d2-46b2-8972-f0858b471882] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0725 16:45:59.417272   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:45:59.423262   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:45:59.433442   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:45:59.455352   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:45:59.496798   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:45:59.577020   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:45:59.737183   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:46:00.057440   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:46:00.697839   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:46:01.978078   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-s8ztt" [72875a54-64d2-46b2-8972-f0858b471882] Running
E0725 16:46:04.540322   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.012564456s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.08s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220725163045-14919 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20220725163045-14919 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (56.23s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220725164719-14919 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3
E0725 16:47:21.345327   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:47:24.920188   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:05.882104   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220725164719-14919 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3: (56.234541132s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.23s)
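Note: --preload=false makes the start pull the component images individually instead of relying on a cached preloaded tarball, which is the point of the no-preload group. Equivalent manual invocation (profile name illustrative):

    out/minikube-darwin-amd64 start -p no-preload-demo --memory=2200 \
      --preload=false --driver=docker --kubernetes-version=v1.24.3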

TestStartStop/group/no-preload/serial/DeployApp (10.77s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220725164719-14919 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context no-preload-20220725164719-14919 create -f testdata/busybox.yaml: (1.652554928s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [748cbe15-a042-4071-bb81-462c0c5cdc05] Pending
helpers_test.go:342: "busybox" [748cbe15-a042-4071-bb81-462c0c5cdc05] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [748cbe15-a042-4071-bb81-462c0c5cdc05] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.015325855s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220725164719-14919 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.77s)
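Note: DeployApp applies testdata/busybox.yaml, waits for the pod to become Ready, then reads the container's open-file limit. The wait-and-check tail can be replayed by hand as:

    kubectl --context no-preload-20220725164719-14919 wait pod/busybox \
      --for=condition=Ready --timeout=8m
    kubectl --context no-preload-20220725164719-14919 exec busybox -- /bin/sh -c "ulimit -n"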

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220725164719-14919 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220725164719-14919 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/no-preload/serial/Stop (12.57s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220725164719-14919 --alsologtostderr -v=3
E0725 16:48:30.405919   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:30.412369   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:30.422675   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:30.444959   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:30.486246   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:30.566728   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:30.727048   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:31.047224   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:31.688841   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:32.971127   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:35.533557   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220725164719-14919 --alsologtostderr -v=3: (12.574731528s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.57s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919: exit status 7 (116.619534ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220725164719-14919 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)
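Note: minikube status exits non-zero for a stopped host (the exit status 7 above, which the test explicitly flags as "may be ok"), so scripted checks should branch on the exit code rather than treating it as a hard failure:

    out/minikube-darwin-amd64 status --format='{{.Host}}' -p no-preload-20220725164719-14919
    echo "status exit=$?"    # 7 here means the host is stopped, not that the command broke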

TestStartStop/group/no-preload/serial/SecondStart (301.81s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220725164719-14919 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3
E0725 16:48:40.654693   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:41.204500   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:41.209600   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:41.219805   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:41.239994   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:41.280426   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:41.360604   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:41.521300   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:41.841418   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:42.481633   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:43.267562   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
E0725 16:48:43.761905   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:46.322288   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:50.895050   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:48:51.442798   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:01.684347   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:11.384321   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:22.185986   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:27.828285   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:45.821778   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:45.827531   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:45.839619   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:45.859889   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:45.900443   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:45.981938   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:46.142154   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:46.463637   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:47.106005   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:48.389651   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:50.950861   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:52.368038   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:53.070933   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:49:53.077093   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:49:53.089277   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:49:53.109554   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:49:53.151721   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:49:53.233158   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:49:53.393598   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:49:53.714507   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:49:54.355305   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:49:55.635567   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:49:56.071741   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:49:58.196728   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:03.158319   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:50:03.317649   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:06.313508   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:50:13.558353   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:50:18.963672   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220725164719-14919 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3: (5m1.291062598s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220725164719-14919 -n no-preload-20220725164719-14919
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (301.81s)

TestStartStop/group/old-k8s-version/serial/Stop (1.68s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220725164610-14919 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220725164610-14919 --alsologtostderr -v=3: (1.677486086s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.68s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725164610-14919 -n old-k8s-version-20220725164610-14919: exit status 7 (119.167642ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220725164610-14919 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.05s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-9c5cf" [fb1410cc-4f8d-414e-abf8-64f2efff1852] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-9c5cf" [fb1410cc-4f8d-414e-abf8-64f2efff1852] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.051749709s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.05s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.6s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-9c5cf" [fb1410cc-4f8d-414e-abf8-64f2efff1852] Running
E0725 16:53:58.132356   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00596934s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220725164719-14919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context no-preload-20220725164719-14919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.592494612s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.60s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.51s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220725164719-14919 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.51s)

TestStartStop/group/embed-certs/serial/FirstStart (50.17s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220725165448-14919 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3
E0725 16:54:53.072947   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
E0725 16:55:13.521974   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:55:18.966065   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/skaffold-20220725162928-14919/client.crt: no such file or directory
E0725 16:55:20.766062   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220725165448-14919 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3: (50.174340102s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.17s)

TestStartStop/group/embed-certs/serial/DeployApp (10.71s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220725165448-14919 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context embed-certs-20220725165448-14919 create -f testdata/busybox.yaml: (1.586884465s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [44ec7088-4b32-4d7f-b370-588c7fbfe01a] Pending
helpers_test.go:342: "busybox" [44ec7088-4b32-4d7f-b370-588c7fbfe01a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [44ec7088-4b32-4d7f-b370-588c7fbfe01a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.012595089s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220725165448-14919 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.71s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220725165448-14919 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220725165448-14919 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/embed-certs/serial/Stop (12.69s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220725165448-14919 --alsologtostderr -v=3
E0725 16:55:55.134452   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:55:57.229974   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:55:59.456149   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/auto-20220725163045-14919/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220725165448-14919 --alsologtostderr -v=3: (12.693947944s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.69s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.36s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919: exit status 7 (117.878167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220725165448-14919 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.36s)

TestStartStop/group/embed-certs/serial/SecondStart (302.47s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220725165448-14919 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3
E0725 16:56:10.674074   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/addons-20220725155351-14919/client.crt: no such file or directory
E0725 16:56:22.825162   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/enable-default-cni-20220725163045-14919/client.crt: no such file or directory
E0725 16:56:24.923487   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kubenet-20220725163045-14919/client.crt: no such file or directory
E0725 16:56:39.022817   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:56:43.989951   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory
E0725 16:56:55.967246   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/functional-20220725155824-14919/client.crt: no such file or directory
E0725 16:58:17.275348   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:17.280469   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:17.291614   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:17.312552   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:17.353559   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:17.434346   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:17.594518   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:17.916091   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:18.558439   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:19.839973   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:22.402315   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:27.522928   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:30.443579   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/cilium-20220725163046-14919/client.crt: no such file or directory
E0725 16:58:37.763255   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:58:41.242102   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/calico-20220725163046-14919/client.crt: no such file or directory
E0725 16:58:58.244514   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:59:39.204951   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/no-preload-20220725164719-14919/client.crt: no such file or directory
E0725 16:59:45.830659   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/false-20220725163046-14919/client.crt: no such file or directory
E0725 16:59:53.075493   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/bridge-20220725163045-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220725165448-14919 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3: (5m1.967564426s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220725165448-14919 -n embed-certs-20220725165448-14919
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (302.47s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.02s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-kxp9z" [55753ac7-fd73-4470-be9e-0e5b0e8d250e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-kxp9z" [55753ac7-fd73-4470-be9e-0e5b0e8d250e] Running

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.014618299s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.59s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-kxp9z" [55753ac7-fd73-4470-be9e-0e5b0e8d250e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00546326s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220725165448-14919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context embed-certs-20220725165448-14919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.584848743s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.59s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220725165448-14919 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (46.25s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220725170207-14919 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220725170207-14919 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3: (46.253236289s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (46.25s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.73s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220725170207-14919 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context default-k8s-different-port-20220725170207-14919 create -f testdata/busybox.yaml: (1.613059641s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [fe04ab73-155d-41ee-aa9e-b715a5db3077] Pending
helpers_test.go:342: "busybox" [fe04ab73-155d-41ee-aa9e-b715a5db3077] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [fe04ab73-155d-41ee-aa9e-b715a5db3077] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.012229134s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220725170207-14919 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.73s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.71s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220725170207-14919 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220725170207-14919 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.71s)

TestStartStop/group/default-k8s-different-port/serial/Stop (12.63s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220725170207-14919 --alsologtostderr -v=3
E0725 17:03:07.044837   14919 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-13765-9b4ecbb2d2dd64a0f495a0351a574dab999c1b37/.minikube/profiles/kindnet-20220725163046-14919/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220725170207-14919 --alsologtostderr -v=3: (12.627250687s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (12.63s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.34s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919: exit status 7 (119.757774ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220725170207-14919 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (306.25s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220725170207-14919 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220725170207-14919 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3: (5m5.677448997s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220725170207-14919 -n default-k8s-different-port-20220725170207-14919
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (306.25s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (7.02s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-lxsld" [47fb3e0a-7080-462a-910d-d9820f6f9eb2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-lxsld" [47fb3e0a-7080-462a-910d-d9820f6f9eb2] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.016364201s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (7.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.81s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-lxsld" [47fb3e0a-7080-462a-910d-d9820f6f9eb2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009503294s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220725170207-14919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context default-k8s-different-port-20220725170207-14919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.796583272s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.81s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.5s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220725170207-14919 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.50s)

TestStartStop/group/newest-cni/serial/FirstStart (44.27s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220725170926-14919 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220725170926-14919 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3: (44.273718915s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.27s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.8s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220725170926-14919 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/newest-cni/serial/Stop (12.59s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220725170926-14919 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220725170926-14919 --alsologtostderr -v=3: (12.588004011s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.59s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919: exit status 7 (118.458638ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220725170926-14919 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/newest-cni/serial/SecondStart (18.65s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220725170926-14919 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220725170926-14919 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3: (18.130505565s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220725170926-14919 -n newest-cni-20220725170926-14919
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.65s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.57s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220725170926-14919 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.57s)

Test skip (18/289)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.24.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.24.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.3/cached-images (0.00s)

TestDownloadOnly/v1.24.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.24.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.3/binaries (0.00s)

TestAddons/parallel/Registry (17.8s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 12.359599ms
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-nxhnj" [382abc54-b958-4667-8b97-6d89ea0861f3] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008685217s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-s95gs" [f95438f2-c603-4d2c-90ca-2d9cb6a47d93] Running
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00840688s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220725155351-14919 delete po -l run=registry-test --now

=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) Done: kubectl --context addons-20220725155351-14919 delete po -l run=registry-test --now: (2.937676709s)
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220725155351-14919 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-20220725155351-14919 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.827880396s)
addons_test.go:307: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (17.80s)

TestAddons/parallel/Ingress (12.87s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220725155351-14919 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220725155351-14919 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220725155351-14919 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [13b3119d-8d66-484a-be5f-11e0f386474b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [13b3119d-8d66-484a-be5f-11e0f386474b] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.010860524s
addons_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725155351-14919 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.87s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
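A minimal sketch of a platform guard like the one at driver_install_or_update_test.go:41: the KVM2 machine driver only ships for Linux, so there is nothing to install or update elsewhere. The same pattern, keyed on "windows" instead of "linux", gates TestScheduledStopWindows further down.

package integration

import (
	"runtime"
	"testing"
)

func TestKVMDriverInstallOrUpdateSketch(t *testing.T) {
	// docker-machine-driver-kvm2 is a Linux-only binary, so this test
	// has nothing to exercise on darwin or windows hosts.
	if runtime.GOOS != "linux" {
		t.Skip("Skip if not linux.")
	}
	// Download and upgrade checks for the driver binary would follow here.
}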

TestFunctional/parallel/ServiceCmdConnect (10.13s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220725155824-14919 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220725155824-14919 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-2pqrz" [a6cc9979-8d80-426e-a141-ab5637814702] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-2pqrz" [a6cc9979-8d80-426e-a141-ab5637814702] Running
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.008808915s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.13s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
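A sketch of the runtime guard behind this skip, assuming the suite knows the cluster's container runtime (passed in explicitly here for illustration): podman-env is only meaningful when the runtime is podman, and this run used docker.

package integration

import "testing"

// maybeSkipPodmanEnv is an illustrative helper: it skips unless the
// cluster was started with the podman container runtime.
func maybeSkipPodmanEnv(t *testing.T, containerRuntime string) {
	if containerRuntime != "podman" {
		t.Skipf("only validate podman env with podman container runtime, currently testing %s", containerRuntime)
	}
}

func TestPodmanEnvSketch(t *testing.T) {
	maybeSkipPodmanEnv(t, "docker") // this run used the docker runtime
	// podman-env validation would follow on a podman-backed cluster.
}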

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
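All three DNS skips above come from the same guard at functional_test_tunnel_test.go:97. A sketch under the assumption that it keys on the host OS plus the driver name (the parameter is illustrative): minikube tunnel can only install a macOS DNS resolver when the hyperkit driver exposes a routable VM IP.

package integration

import (
	"runtime"
	"testing"
)

// validateDNSSketch skips unless running the hyperkit driver on macOS,
// the only combination where tunnel-based DNS forwarding is wired up;
// each of the three serial DNS tests would call a guard like this first.
func validateDNSSketch(t *testing.T, driverName string) {
	if runtime.GOOS != "darwin" || driverName != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
	// dig / dscacheutil / curl-by-hostname checks would follow here.
}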

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
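This one is an opt-in rather than a platform skip: the suite gates the gVisor test behind a command-line flag that defaults to false. A sketch of that pattern, with an illustrative flag variable:

package integration

import (
	"flag"
	"testing"
)

// gvisor mirrors the suite's opt-in flag: the addon pulls large images
// and needs a compatible runtime, so the test only runs when requested.
var gvisor = flag.Bool("gvisor", false, "run the gvisor addon test")

func TestGvisorAddonSketch(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// gvisor addon enable/verify steps would follow here.
}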

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.73s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220725163045-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220725163045-14919
--- SKIP: TestNetworkPlugins/group/flannel (0.73s)

TestNetworkPlugins/group/custom-flannel (0.59s)
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220725163046-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-20220725163046-14919
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.59s)
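Both flannel variants fail the same compatibility gate at net_test.go:79. A sketch, assuming the guard matches on the CNI name and the driver (both parameter names are illustrative); the iptables error quoted in the skip message records why the combination is disabled:

package integration

import "testing"

// maybeSkipFlannel is an illustrative helper: flannel's CNI setup needs
// iptables targets that the docker driver's environment does not provide.
func maybeSkipFlannel(t *testing.T, cniName, driverName string) {
	if (cniName == "flannel" || cniName == "custom-flannel") && driverName == "docker" {
		t.Skip("flannel is not yet compatible with Docker driver: " +
			"iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory")
	}
}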

TestStartStop/group/disable-driver-mounts (0.47s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220725170207-14919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220725170207-14919
--- SKIP: TestStartStop/group/disable-driver-mounts (0.47s)